github – CSS-Tricks
Tips, Tricks, and Techniques on using Cascading Style Sheets.
https://css-tricks.com

Beautify GitHub Profile (May 26, 2022)
https://css-tricks.com/beautify-github-profile/


It wasn’t long ago that Nick Sypteras showed us how to make custom badges for a GitHub repo. Well, Reza Shakeri put Beautify GitHub Profile together and it’s a huuuuuuge repo of different badges that pulls lots of examples together with direct links to the repos you can use to create them.

And it doesn’t stop there! If you’re looking for some sort of embeddable widget, there’s everything from GitHub repo stats and contribution visualizations, all the way to embedded PageSpeed Insights and Spotify playlists. Basically, a big ol’ spot to get some inspiration.

Some things are simply wild!

3D chart of commit history from the Beautify GitHub Profile repo.
I bet Jhey would like to get his hands on those cuboids!

Just scrolling through the repo gives me flashes of the GeoCities days, though. All it needs is a sparkly unicorn and a tiled background image to complete the outfit. 👔

Adding Custom GitHub Badges to Your Repo (May 3, 2022)
https://css-tricks.com/adding-custom-github-badges-to-your-repo/


If you’ve spent time looking at open-source repos on GitHub, you’ve probably noticed that most of them use badges in their README files. Take the official React repository, for instance. There are GitHub badges all over the README file that communicate important dynamic info, like the latest released version and whether the current build is passing.

Showing the header of React's repo displaying GitHub badges.

Badges like these provide a nice way to highlight key information about a repository. You can even use your own custom assets as badges, like Next.js does in its repo.

Showing the Next.js repo header with GitHub badges.

But the most useful thing about GitHub badges by far is that they update by themselves. Instead of hardcoding values into your README, badges in GitHub can automatically pick up changes from a remote server.

Let’s discuss how to add dynamic GitHub badges to the README file of your own project. We’ll start by using an online generator called badgen.net to create some basic badges. Then we’ll make our badges dynamic by hooking them up to our own serverless function via Napkin. Finally, we’ll take things one step further by using our own custom SVG files.

Showing three examples of custom GitHub badges including Apprentice, Intermediate, and wizard skill levels.

First off: How do badges work?

Before we start building some badges in GitHub, let’s quickly go over how they are implemented. It’s actually very simple: badges are just images. README files are written in Markdown, and Markdown supports images like so:

![alt text](path or URL to image)

The fact that we can include a URL to an image means that a Markdown page will request the image data from a server when the page is rendered. So, if we control the server that has the image, we can change what image is sent back using whatever logic we want!
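To make that concrete, here’s a minimal sketch of such a server using nothing but Node’s built-in http module. The port, colors, SVG markup, and the “even minute” rule are all arbitrary placeholders I made up for illustration; the point is only that the server decides which image bytes to send back for the badge URL.

// Rough sketch (not the article's actual setup): a tiny Node server that
// picks which SVG badge to return. Port, colors, and logic are placeholders.
import http from 'http'

const badge = (label, color) => `
  <svg xmlns="http://www.w3.org/2000/svg" width="140" height="20">
    <rect width="140" height="20" rx="3" fill="${color}"/>
    <text x="70" y="14" fill="#fff" text-anchor="middle" font-family="sans-serif" font-size="11">${label}</text>
  </svg>`

http.createServer((req, res) => {
  // Whatever logic we want: here the badge simply flips every minute
  const even = new Date().getMinutes() % 2 === 0
  res.writeHead(200, { 'Content-Type': 'image/svg+xml' })
  res.end(even ? badge('even minute', '#2ea44f') : badge('odd minute', '#d73a49'))
}).listen(3000)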

Thankfully, we have a couple options to deploy our own server logic without the whole “setting up the server” part. For basic use cases, we can create our GitHub badge images with badgen.net using its predefined templates. And again, Napkin will let us quickly code a serverless function in our browser and then deploy it as an endpoint that our GitHub badges can talk to.

Making badges with Badgen

Let’s start off with the simplest badge solution: a static badge via badgen.net. The Badgen API uses URL patterns to create templated badges on the fly. The URL pattern is as follows:

https://badgen.net/badge/:subject/:status/:color?icon=github

There’s a full list of the options you have for colors, icons, and more on badgen.net. For this example, let’s use these values:

  • :subject: Hello
  • :status: World
  • :color: red
  • :icon: twitter

Our final URL winds up looking like this:

https://badgen.net/badge/hello/world/red?icon=twitter

Adding a GitHub badge to the README file

Now we need to embed this badge in the README file of our GitHub repo. We can do that in Markdown using the syntax we looked at earlier:

![my badge](https://badgen.net/badge/hello/world/red?icon=twitter)

Badgen provides a ton of different options, so I encourage you to check out their site and play around! For instance, one of the templates lets you show the number of times a given GitHub repo has been starred. Here’s a star GitHub badge for the Next.js repo as an example:

https://badgen.net/github/stars/vercel/next.js

Pretty cool! But what if you want your badge to show some information that Badgen doesn’t natively support? Luckily, Badgen has a URL template for using your own HTTPS endpoints to get data:

https://badgen.net/https/url/to/your/endpoint

For example, let’s say we want our badge to show the current price of Bitcoin in USD. All we need is a custom endpoint that returns this data as JSON like this:

{
  "color": "blue",
  "status": "$39,333.7",
  "subject": "Bitcoin Price USD"
}

Assuming our endpoint is available at https://some-endpoint.example.com/bitcoin, we can pass its data to Badgen using the following URL scheme:

https://badgen.net/https/some-endpoint.example.com/bitcoin
GitHub badge. On the left is a gray label with white text. On the right is a blue label with white text showing the price of Bitcoin.
The data for the cost of Bitcoin is served right to the GitHub badge.

Even cooler now! But we still have to actually create the endpoint that provides the data for the GitHub badge. 🤔 Which brings us to…

Badgen + Napkin

There’s plenty of ways to get your own HTTPS endpoint. You could spin up a server with DigitalOcean or AWS EC2, or you could use a serverless option like Google Cloud Functions or AWS Lambda; however, those can all still become a bit complex and tedious for our simple use case. That’s why I’m suggesting Napkin’s in-browser function editor to code and deploy an endpoint without any installs or configuration.

Head over to Napkin’s Bitcoin badge example to see an example endpoint. You can see the code to retrieve the current Bitcoin price and return it as JSON in the editor. You can run the code yourself from the editor or directly use the endpoint.

To use the endpoint with Badgen, work with the same URL scheme from above, only this time with the Napkin endpoint:

https://badgen.net/https/napkin-examples.npkn.net/bitcoin-badge

More ways to customize GitHub badges

Next, let’s fork this function so we can add in our own custom code to it. Click the “Fork” button in the top-right to do so. You’ll be prompted to make an account with Napkin if you’re not already signed in.

Once we’ve successfully forked the function, we can add whatever code we want, using any npm modules we want. Let’s add the Moment.js npm package and update the endpoint response to show the time that the price of Bitcoin was last updated directly in our GitHub badge:

import fetch from 'node-fetch'
import moment from 'moment'

const bitcoinPrice = async () => {
  const res = await fetch("https://blockchain.info/ticker")
  const json = await res.json()
  const lastPrice = json.USD.last+""

  const [ints, decimals] = lastPrice.split(".")

  // Insert a comma before the last three digits of the integer part (e.g. 39333.7 becomes 39,333.7)
  return ints.slice(0, -3) + "," + ints.slice(-3) + "." + decimals
}

export default async (req, res) => {
  const btc = await bitcoinPrice()

  res.json({
    icon: 'bitcoin',
    subject: `Bitcoin Price USD (${moment().format('h:mma')})`,
    color: 'blue',
    status: `\$${btc}`
  })
}
Deploy the function, update your URL, and now we get this.

You might notice that the badge takes some time to refresh the next time you load up the README file over at GitHub. That’s because GitHub uses a proxy mechanism to serve badge images.

GitHub serves the badge images this way to prevent abuse, like high request volume or JavaScript code injection. We can’t control GitHub’s proxy, but fortunately, it doesn’t cache too aggressively (or else that would kind of defeat the purpose of badges). In my experience, the TTL is around 5-10 minutes.

OK, final boss time.

Custom SVG badges with Napkin

For our final trick, let’s use Napkin to send back a completely new SVG, so we can use custom images like we saw on the Next.js repo.

A common use case for GitHub badges is showing the current status of a website. Let’s do that. Our badge will support two states: a green badge when the site is up, and a red badge when it’s down.

Badgen doesn’t support custom SVGs, so instead, we’ll have our badge talk directly to our Napkin endpoint. Let’s create a new Napkin function for this called site-status-badge.

The code in this function makes a request to example.com. If the request status is 200, it returns the green badge as an SVG file; otherwise, it returns the red badge. You can check out the function, but I’ll also include the code here for reference:

import fetch from 'node-fetch'

const site_url = "https://example.com"

// full SVGs at https://napkin.io/examples/site-status-badge
const customUpBadge = ''
const customDownBadge = ''

const isSiteUp = async () => {
  const res = await fetch(site_url)
  return res.ok
}

export default async (req, res) => {
  const forceFail = req.path?.endsWith('/400')

  const healthy = await isSiteUp()
  res.set('content-type', 'image/svg+xml')
  if (healthy && !forceFail) {
    res.send(Buffer.from(customUpBadge).toString('base64'))
  } else {
    res.send(Buffer.from(customDownBadge).toString('base64'))
  }
}

Odds are pretty low that the example.com site will ever go down, so I added the forceFail case to simulate that scenario. Now we can add a /400 after the Napkin endpoint URL to try it:

![status up](https://napkin-examples.npkn.net/site-status-badge/)
![status down](https://napkin-examples.npkn.net/site-status-badge/400)

Very nice 😎


And there we have it! Your GitHub badge training is complete. But the journey is far from over. There are a million different situations where badges like this are super helpful. Have fun experimenting and go make that README sparkle! ✨


Open Source & Sustainability (January 12, 2022)
https://css-tricks.com/open-source-sustainability/


It’s a god-damned miracle to me that open source is as robust as it is in tech. Consider the options. You could have a job (or be entrepreneurial) with your coding skills and likely be paid quite well. Or, you could write code for free and have strangers yell at you every day at all hours. I like being a contributing kinda guy, but I don’t have the stomach for the latter.

Fair enough, in reality, most developers do a bit of coding work on both sides. And clearly, they find some value in doing open-source work; otherwise, they wouldn’t do it. But we’ve all heard the stories. It leads to developer burnout, depression, and countless abandoned projects. It’s like we know how to contribute to an open-source project (and even have some ground rules on etiquette), but lack an understanding of how to maintain it.

Dave, in “Sustaining Maintaining,” thinks it might be a lack of education on how to manage open source:

There’s plenty of write-ups on GitHub about how to start a new open source project, or how to add tooling, but almost no information or best practices on how to maintain a project over years. I think there’s a big education gap and opportunity here. GitHub has an obvious incentive to increase num_developers and num_repos, but I think it’s worthwhile to ease the burden of existing developers and increase the quality and security of existing repos. Open source maintenance needs a manual.

That’s a wonderful idea. I’ve been around tech a hot minute, but I don’t feel particularly knowledgeable about how to operate an open-source project. And frankly, that makes me scared of it, and my fear makes me avoid doing it at all.

I know how to set up the basics, but what if the project blows up in popularity? How do I manage my time commitment to it? How do I handle community disputes? Do I need a request for comments workflow? Who can I trust to help? What are the monetization strategies? What are the security concerns? What do I do when there starts to be dozens, then hundreds, then thousands of open issues? What do I do when I stop caring about this project? How do I stop myself from burning it to the ground?

If there was more education around how to do this well, more examples out there of people doing it well and benefitting from it, and some attempts at guardrails from the places that host them, that would go a long way.

Money is a key factor. Whenever I see success in open source, I see actually usable amounts of money coming in. I see big donations appropriately coming into Vue. I see Automattic building an empire around their core open-source products. I see Greensock having an open-source library but offering membership and a license for certain use cases and having that sustain a team long-term.

If you’re interested in monetizing open-source, Nicholas C. Zakas has been writing about it lately. It’s a three-parter so far, but starts here in “Making your open source project sponsor-ready, Part 1: Companies and trust”:

While it’s possible to bring in a decent amount of money through individual sponsorships, the real path to open source sustainability is to get larger donations from the companies that depend on your project. Getting $5 to $10 each month from a bunch of individuals is nice, but not as nice as getting $1,000 each month from a bunch of companies.

I think it would be cool to see a lot more developers making a proper healthy living on open source. If nothing else it would make me feel like this whole ecosystem is more stable.


Update: I wrote this before the whole Marak Squire kerfuffle, but I feel that just underscores all this.


Generate a Pull Request of Static Content With a Simple HTML Form (November 16, 2021)
https://css-tricks.com/generate-a-pull-request-of-static-content-with-a-simple-html-form/


Jamstack has been in the website world for years. Static Site Generators (SSGs) — which often have content that lives right within a GitHub repo itself — are a big part of that story. That opens up the idea of having contributors that can open pull requests to add, change, or edit content. Very useful!

There are a number of community-driven sites that work exactly like this, where the resource listings live as files in a repo and new entries come in as pull requests.

Why build with a static site approach?

When we need to build content-based sites like this, it’s common to think about what database to use. Keeping content in a database is a time-honored good idea. But it’s not the only approach! SSGs can be a great alternative because…

  • They are cheap and easy to deploy. SSGs are usually free, making them great for an MVP or a proof of concept.
  • They have great security. There is nothing to hack through the browser, as the site often consists of nothing but static files.
  • You’re ready to scale. The host you’re already on can handle it.

There is another advantage for us when it comes to a content site. The content of the site itself can be written in static files right in the repo. That means that adding and updating content can happen right from pull requests on GitHub, for example. Even for the non-technically inclined, it opens the door to things like Netlify CMS and the concept of open authoring, allowing for community contributions.

But let’s go the super lo-fi route and embrace the idea of pull requests for content, using nothing more than basic HTML.

The challenge

How people contribute by adding or updating a resource isn’t always perfectly straightforward. People need to understand how to fork your repository, how and where to add their content, content formatting standards, required fields, and all sorts of stuff. They might even need to “spin up” the site themselves locally to ensure the content looks right.

People who seriously want to help our site will sometimes back off because the process of contributing is a technological hurdle with a learning curve, which is sad.

You know what anybody can do? Use a <form>

Just like on a normal website, the easiest way for people to submit content is to fill out a form with the content they want and submit it.

What if we could make a way for users to contribute content to our sites by way of nothing more than an HTML <form> designed to take exactly the content we need? But instead of the form posting to a database, it would go the route of a pull request against our static site generator? There is a trick!

The trick: Create a GitHub pull request with query parameters

Here’s a little-known trick: we can pre-fill a pull request against our repository by adding query parameters to a special GitHub URL. This comes right from the GitHub docs themselves.

Let’s reverse engineer this.

If we know we can pre-fill a link, then we need to generate the link. We’re trying to make this easy, remember. To generate this dynamic, data-filled link, we’ll use a touch of JavaScript.

So now, how do we generate this link after the user submits the form?

Demo time!

Let’s take the Serverless site from CSS-Tricks as an example. Currently, the only way to add a new resource is by forking the repo on GitHub and adding a new Markdown file. But let’s see how we can do it with a form instead of jumping through those hoops.

The Serverless site itself has many categories (e.g. for forms) we can contribute to. For the sake of simplicity, let’s focus on the “Resources” category. People can add articles about things related to Serverless or Jamstack from there.

The Resources page of the CSS-Tricks Serverless site. The site is primarily purple in varying shades with accents of orange. The page shows a couple of serverless resources in the main area and a list of categories in the right sidebar.

All of the resource files are in this folder in the repo.

Showing the main page of the CSS-Tricks Serverless repo in GitHub, displaying all the files.

Just picking a random file from there to explore the structure…

---
title: "How to deploy a custom domain with the Amplify Console"
url: "https://read.acloud.guru/how-to-deploy-a-custom-domain-with-the-amplify-console-a884b6a3c0fc"
author: "Nader Dabit"
tags: ["hosting", "amplify"]
---

In this tutorial, we’ll learn how to add a custom domain to an Amplify Console deployment in just a couple of minutes.

Looking over that content, our form must have these fields:

  • Title
  • URL
  • Author
  • Tags
  • Snippet or description of the link.

So let’s build an HTML form for all those fields:

<div class="columns container my-2">
  <div class="column is-half is-offset-one-quarter">
  <h1 class="title">Contribute to Serverless Resources</h1>

  <div class="field">
    <label class="label" for="title">Title</label>
    <div class="control">
      <input id="title" name="title" class="input" type="text">
    </div>
  </div>
  
  <div class="field">
    <label class="label" for="url">URL</label>
    <div class="control">
      <input id="url" name="url" class="input" type="url">
    </div>
  </div>
    
  <div class="field">
    <label class="label" for="author">Author</label>
    <div class="control">
      <input id="author" class="input" type="text" name="author">
    </div>
  </div>
  
  <div class="field">
    <label class="label" for="tags">Tags (comma separated)</label>
    <div class="control">
      <input id="tags" class="input" type="text" name="tags">
    </div>
  </div>
    
  <div class="field">
    <label class="label" for="description">Description</label>
    <div class="control">
      <textarea id="description" class="textarea" name="description"></textarea>
    </div>
  </div>
  
   <!-- Prepare the JavaScript function for later -->
  <div class="control">
    <button onclick="validateSubmission();" class="button is-link is-fullwidth">Submit</button>
  </div>
    
  </div>
</div>

I’m using Bulma for styling, so the class names in use here are from that.

Now we write a JavaScript function that transforms the user’s input into a pre-filled GitHub URL by way of query parameters for our pull request. Here is the step by step (a rough sketch of the function follows the list):

  • Get the user’s input about the content they want to add
  • Generate a string from all that content
  • Encode the string so it can safely be passed along as part of a URL
  • Attach the encoded string to a complete URL pointing to GitHub’s page for new pull requests
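Here is a minimal sketch of what that function could look like, using GitHub’s new-file URL with filename and value query parameters (the same pre-fill trick described above). The OWNER/REPO name, the main branch, the site/resources/ folder, and the filename scheme are all placeholders to swap for your project’s real values; the demo Pen below handles more details, so treat this purely as an outline of the approach.

// Sketch only: OWNER/REPO, the "main" branch, and the site/resources/ folder
// are hypothetical and need to be replaced with your project's real values.
function validateSubmission() {
  // 1. Get the user's input
  const title = document.getElementById('title').value
  const url = document.getElementById('url').value
  const author = document.getElementById('author').value
  const tags = document.getElementById('tags').value
  const description = document.getElementById('description').value

  // 2. Generate the Markdown file content (front matter plus the description)
  const tagList = tags.split(',').map((tag) => `"${tag.trim()}"`).join(', ')
  const fileContent = `---
title: "${title}"
url: "${url}"
author: "${author}"
tags: [${tagList}]
---

${description}
`

  // 3 & 4. Encode everything and attach it to GitHub's "create new file" URL,
  // which pre-fills the filename and contents and flows straight into a pull request
  const fileName = `${title.toLowerCase().trim().replace(/\s+/g, '-')}.md`
  const prefilledUrl = 'https://github.com/OWNER/REPO/new/main' +
    `?filename=site/resources/${encodeURIComponent(fileName)}` +
    `&value=${encodeURIComponent(fileContent)}`

  window.open(prefilledUrl, '_blank')
}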

Here is the Pen:

After pressing the Submit button, the user is taken right to GitHub with an open pull request for this new file in the right location.

GitHub pull request screen showing a new file with content.

Quick caveat: Users still need a GitHub account to contribute. But this is still much easier than having to know how to fork a repo and create a pull request from that fork.

Other benefits of this approach

Well, for one, this is a form that lives on our site. We can style it however we want. That sort of control is always nice to have.

Secondly, since we’ve already written the JavaScript, we can use the same basic idea to talk with other services or APIs in order to process the input first. For example, if we need information from a website (like the title, meta description, or favicon) we can fetch this information just by providing the URL.

Taking things further

Let’s have a play with that second point above. We could simply pre-fill our form by fetching information from the URL provided by the user rather than having them enter it by hand.

With that in mind, let’s now only ask the user for two inputs (rather than four) — just the URL and tags.

How does this work? We can fetch meta information from a website with JavaScript just by having the URL. There are many APIs that fetch information from a website, but you might try the one that I built for this project. Hit any URL like this:

https://metadata-api.vercel.app/api?url=https://css-tricks.com

The demo above uses that as an API to pre-fill data based on the URL the user provides. Easier for the user!
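As a rough sketch of that idea (the response shape here is an assumption on my part, so adjust the property names to whatever your metadata API actually returns):

// Sketch: assumes the metadata API responds with JSON like { title, description }
// and that the form has the #url, #title, and #description fields from earlier
async function prefillFromUrl() {
  const url = document.getElementById('url').value
  const res = await fetch(`https://metadata-api.vercel.app/api?url=${encodeURIComponent(url)}`)
  const meta = await res.json()

  document.getElementById('title').value = meta.title || ''
  document.getElementById('description').value = meta.description || ''
}

// Run it whenever the URL field loses focus
document.getElementById('url').addEventListener('blur', prefillFromUrl)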

Wrapping up

You could think of this as a very minimal CMS for any kind of Static Site Generator. All you need to do is customize the form and update the pre-filled query parameters to match the data formats you need.

How will you use this sort of thing? The kinds of sites we talked about at the very beginning are good examples. But there are so many other times where you might need to do something with a user submission, and this might be a low-lift way to do it.


From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo (August 17, 2021)
https://css-tricks.com/from-a-single-repo-to-multi-repos-to-monorepo-to-multi-monorepo/


I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.

As the code was restructured time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”

Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.

Stage 1: Single repo

The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.

It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.

Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.

I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.

A single repository hosting all our code.

Issues with the single repo

While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.

So it was time to start coding PHP the right way.

Stage 2: Multirepo

Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.

Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we are unable to have multiple PHP packages, each of them with its own composer.json hosted on the same repo.

As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.

In the multirepo, each package is hosted on its own repo.

Issues with the multirepo

Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.

The reason that the project was split into so many packages is because I also decoupled the code from WordPress (so that these could also be used with other CMSs), for which every package must be very granular, dealing with a single goal.

Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.

I ended up not using feature branches at all, at least in my case, and simply pointed every package to the dev-master version of its dependencies (i.e. I was not versioning packages). I wouldn’t be surprised to learn that this is a common practice more often than not.

There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos and doing git commit -m "some message" on the project executes a git commit -m "some message" command on every repo, allowing them to be in sync with each other.

However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.

So, it was time to bring all packages to the same repo.

Stage 3: Monorepo

The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.

The monorepo hosts multiple packages.

As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (at one repo per package) to publish them to Packagist for distribution and consumption.

The monorepo hosts the source code, multiple repos distribute it.

Switching to the Monorepo

Switching to the monorepo approach involved the following steps:

First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.

Showing the GitHub repo for a project called PoP. The screen is in dark mode, so the background is near black and the text is off-white, except for blue links.
The monorepo layers indicate the broader project.

Then, I copied all source code from all repos (getpop/engine, getpop/component-model, etc.) to the corresponding location for that package in the monorepo (i.e. layers/Engine/packages/engine, layers/Engine/packages/component-model, etc).

I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.

Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.

Showing the GitHub repo for the component-model project. The screen is in dark mode, so the background is near black and the text is off-white, except for blue links. There is a sidebar to the right of the screen that is next to the list of files in the repo. The sidebar has an About heading with a description that reads: Read only, component model for Pop, over which the component-based architecture is based." This is highlighted in red.
The downstream repo’s “READ ONLY” is located in the repo description.

I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:

{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}

…which returned a list like this:

{
  "data": {
    "repositoryOwner": {
      "repositories": {
        "nodes": [
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=",
            "name": "hooks",
            "description": "Contracts to implement hooks (filters and actions) for PoP"
          },
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=",
            "name": "root",
            "description": "Declaration of dependencies shared by all PoP components"
          },
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=",
            "name": "engine",
            "description": "Engine for PoP"
          }
        ]
      }
    }
  }
}

From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk="
      description: "[READ ONLY] Engine for PoP"
    }
  ) {
    repository {
      description
    }
  }
}
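A small script can run those mutations in bulk against GitHub’s GraphQL endpoint. This is only a sketch of that idea: it assumes a personal access token in a GITHUB_TOKEN environment variable and a repos array already built from the first query’s results.

// Sketch: loops over repos fetched earlier and prefixes each description with [READ ONLY].
// Assumes GITHUB_TOKEN is a personal access token allowed to edit these repos.
import fetch from 'node-fetch'

const repos = [
  { id: 'MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=', description: 'Engine for PoP' },
  // ...the rest of the nodes returned by the repositories query
]

const updateDescription = async (repo) => {
  const mutation = `
    mutation ($repositoryId: ID!, $description: String!) {
      updateRepository(input: { repositoryId: $repositoryId, description: $description }) {
        repository { description }
      }
    }`

  await fetch('https://api.github.com/graphql', {
    method: 'POST',
    headers: {
      'Authorization': `bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: mutation,
      variables: { repositoryId: repo.id, description: `[READ ONLY] ${repo.description}` },
    }),
  })
}

const main = async () => {
  for (const repo of repos) {
    await updateDescription(repo)
  }
}
main()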

Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.

I chose Monorepo builder, which is written in PHP. I like this tool because I can customize it with my own functionality. Other popular tools are the Git Subtree Splitter (written in Go) and Git Subsplit (bash script).

What I like about the Monorepo

I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.

The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.

To generate a WordPress plugin for distribution, I had created a generate_plugins.yml workflow that triggers when creating a release. With the monorepo, I have adapted it to generate not just one, but multiple plugins, configured via PHP through a custom command in plugin-config-entries-json, and invoked like this in GitHub Actions:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

This way, I can generate my GraphQL API plugin and other plugins hosted in the monorepo all at once. The configuration defined via PHP is this one.

class PluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API for WordPress
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
        'zip_file' => 'graphql-api.zip',
        'main_file' => 'graphql-api.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'graphql-api-for-wp-dist',
      ],
      // GraphQL API - Extension Demo
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
        'zip_file' => 'graphql-api-extension-demo.zip',
        'main_file' => 'graphql-api-extension-demo.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'extension-demo-dist',
      ],
    ];
  }
}

When creating a release, the plugins are generated via GitHub Actions.

Dark mode screen in GitHub showing the actions for the project.
This figure shows plugins generated when a release is created.

If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.

Issues with the Monorepo

I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.

That said, there are downsides to the monorepo:

  • We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries using MIT and the plugin using GPLv2. So, I decided to simplify it using the lowest common denominator between them, which is GPLv2.
  • There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors that were attracted to a specific project can easily get confused.
  • When tagging the code, all packages are versioned independently with that tag whether their particular code was updated or not. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
  • The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
Showing the list of reported issues for the project in GitHub in dark mode. The image shows just how crowded and messy the screen looks when there are a bunch of issues from different projects in the same list without a way to differentiate them.
The issues board can become chaotic without labels that are associated with projects.

All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.

I’m planning to create a “PRO” version of my plugin and host it in a private repo. However, a repo is either public or private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the ability to configure it via PHP. And I want to keep it DRY, avoiding copy/pastes.

It was time to switch to the multi-monorepo.

Stage 4: Multi-monorepo

The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:

A giant red folder illustration is labeled as the downstream monorepo and it contains a smaller green folder showing the upstream monorepo.
The upstream monorepo is contained within the downstream monorepo.

This approach satisfies my requirements by:

  • having the public repo leoloso/PoP be the upstream monorepo, and
  • creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
The same illustration as before, but now the large folder is bright pink and labeled with the private project’s name, and the smaller folder is purplish-blue and labeled with the name of the public upstream monorepo.
A private monorepo can access the files from a public monorepo.

leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):

This figure show how the public monorepo is embedded within the private monorepo in the GitHub project.

Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.

In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:

class PluginDataSource extends UpstreamPluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API PRO
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
        'zip_file' => 'graphql-api-pro.zip',
        'main_file' => 'graphql-api-pro.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-pro-dist',
      ],
      // GraphQL API Extensions
      // Google Translate
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
        'zip_file' => 'graphql-api-google-translate.zip',
        'main_file' => 'graphql-api-google-translate.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-google-translate-dist',
      ],
      // Events Manager
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
        'zip_file' => 'graphql-api-events-manager.zip',
        'main_file' => 'graphql-api-events-manager.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-events-manager-dist',
      ],
    ];
  }
}

GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.

To copy the workflows over, a simple Composer script can do:

{
  "scripts": {
    "copy-workflows": [
      "php -r \"copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');\"",
      "php -r \"copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');\""
    ]
  }
}

Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:

composer copy-workflows

Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:

This figure shows the PRO plugins generated in GitHub Actions.

I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.

Issues with the multi-monorepo

First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.

When doing git checkout, the downstream monorepo must pass the --recurse-submodules option so as to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

As a result, we have to input submodules: recursive to the downstream workflow, but not to the upstream one even though they both use the same source file.

To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP have the value for submodules injected via an environment variable, CHECKOUT_SUBMODULES, like this:

env:
  CHECKOUT_SUBMODULES: ""

jobs:
  provide_data:
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: ${{ env.CHECKOUT_SUBMODULES }}

The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:

env:
  CHECKOUT_SUBMODULES: "recursive"

(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)

This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. The contributors would also be completely unaware of those leakages (and they couldn’t be blamed for it). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full setup. While I share access to the public repo, I am the only one accessing the private one.

As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.

I guess I need to add a warning!

env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""

Wrapping up

My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:

  • It started as a single repo, hosting a monolithic app.
  • It became a multirepo when splitting the app into packages.
  • It was switched to a monorepo to better manage all the packages.
  • It was upgraded to a multi-monorepo to share files with a private monorepo.

Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.

Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all private plugins together. If I ever need to grant contractors access to a specific private plugin, while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.

I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.


GitHub Explains the Open Graph Images (July 29, 2021)
https://css-tricks.com/github-explains-the-open-graph-images/

An explanation of those new GitHub social media images:

[…] our custom Open Graph image service is a little Node.js app that uses the GitHub GraphQL API to collect data, generates some HTML from a template, and pipes it to Puppeteer to “take a screenshot” of that HTML.

Jason Etcovich on The GitHub Blog in “A framework for building Open Graph images”

It’s so satisfying to produce templated images from HTML and CSS. It’s the perfect way to do social media images. If you’re doing it at scale like GitHub, there are a couple of nice tricks in here for speeding it up.
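For a taste of the technique, here is a minimal sketch of the HTML-to-image step using Puppeteer. The template, viewport size, and output path are all arbitrary choices of mine, and GitHub’s real service does much more (data fetching, templating, caching), so treat this as an outline rather than their implementation.

// Sketch: render an HTML template to a PNG with Puppeteer
import puppeteer from 'puppeteer'

const html = `
  <style>
    body { margin: 0; display: grid; place-items: center; height: 100vh;
           background: #0d1117; color: #fff; font-family: sans-serif; }
    h1 { font-size: 64px; }
  </style>
  <h1>my-org/my-repo</h1>`

const run = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({ width: 1200, height: 630 }) // a typical Open Graph image size
  await page.setContent(html)
  await page.screenshot({ path: 'og-image.png' })
  await browser.close()
}
run()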

Custom Styles in GitHub Readme Files (December 23, 2020)
https://css-tricks.com/custom-styles-in-github-readmes/


Even though GitHub Readme files (typically ./readme.md) are Markdown, and although Markdown supports HTML, you can’t put <style> or <script> tags in it. (Well, you can, they just get stripped.) So you can’t apply custom styles there. Or can you?

  1. You can use SVG as an <img src="./file.svg" alt="" /> (anywhere).
  2. When used that way, even stuff like animations within them play (wow).
  3. SVG has stuff like <text> for textual content, but also <foreignObject> for regular ol’ HTML content.
  4. SVG supports <style> tags.
  5. Your readme.md file does support <img> with SVG sources.

Sindre Sorhus combined all that into an example.

That same SVG source will work here:


Upptime (November 18, 2020)
https://css-tricks.com/upptime/

GitHub Actions are like free computers.

Well, there is pricing, but even free plans get 2,000 minutes a month. You write configuration files for what you want these computers to do. Those configuration files go into a repo, so typically they do things specific to that repo. I’m sure that CI/CD is the vast majority of GitHub Actions usage. That is, running your tests and deploying your code. Which is absolutely fantastic.

But like I said, GitHub Actions are computers, so you can have them run whatever code you like. (I’m sure there is EULA stuff you are bound to, but you know what I mean.) Just like everybody’s favorite, serverless functions, GitHub Actions can do that same stuff. Wanna run a build process? Hit an API? Optimize images? Screenshot a URL? Do it up. Most actions are tied to specific events, like “run this code when I commit to a branch” or “run this code against this pull request.” But you can also schedule them on a cron schedule.

So you’ve got a free computer for 2,000 minutes a month that you can run on a schedule. I’m sure that will breed some pretty interesting creativity, especially since GitHub Actions has a marketplace. Allow me to get around to the title of this post… I find Upptime an incredibly clever usage of all this. You essentially get a free, configurable uptime monitor for whatever you want.
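To make the idea concrete, here is a rough sketch of the kind of check a scheduled workflow could run. The URL is a placeholder, and Upptime itself does far more (issues, status pages, response-time graphs), so this is only the core idea of “free computer on a cron schedule as uptime monitor.”

// Sketch: a scheduled GitHub Action could run `node check.js` every few minutes.
// Exiting with a non-zero code marks the workflow run as failed, i.e. "site down."
import fetch from 'node-fetch'

const url = 'https://example.com' // placeholder: the site you want to monitor

const check = async () => {
  const start = Date.now()
  const res = await fetch(url)
  console.log(`${url} responded ${res.status} in ${Date.now() - start}ms`)
  if (!res.ok) process.exit(1)
}

check().catch(() => process.exit(1))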

Optimize Images with a GitHub Action (August 20, 2020)
https://css-tricks.com/optimize-images-with-a-github-action/


I was playing with GitHub Actions the other day. Such a nice tool! Short story: you can have it run code for you, like run your build processes, tests, and deployments. But it’s just configuration files that can run whatever you need. There is a whole marketplace of Actions wanting to do work for you.

What I wanted to do was run code to do image optimization. That way I never have to think about it. Any image in the repo has been optimized.

There is an action for this already, Calibre’s image-actions, which we’ll leverage here. You’ll also need to ensure Actions is enabled for the repo. I know in my main organization we only flip on Actions on a per-repo basis, which is one of the options.

Then you make a file at .github/workflows/optimize-images.yml. That’s where you can configure this action. All your actions can have separate files, if you want them to. I made this a separate file because (1) it only works with “pushes to pull requests,” so if you have other actions that run on different triggers, they won’t mix nicely, and (2) that’s what is in their docs and looks like the suggested usage.

name: Optimize images
on: pull_request
jobs:
  build:
    name: calibreapp/image-actions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master

      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}

Now if you make a pull request, you’ll see it run:

That successful run then leaves a comment on the pull request saying what it was able to optimize:

It will literally re-commit those files back to the pull request as well, so if you’re going to stay on the pull request and keep working, you’ll need to pull again before you can push to get the optimized images.

I can look at that automatic commit and see the difference:

The commit preview in Git Tower.

Now I can merge the PR knowing all is well:

Pretty cool. Is optimizing your images locally particularly hard? No. Is never having to think about it again better? Yeah. You’re taking on a smidge of technical debt here, but reducing it elsewhere, which is a very fair trade, at least in my book.


The GitHub Profile Trick (July 28, 2020)
https://css-tricks.com/the-github-profile-trick/

Monica Powell shared a really cool trick the other day:

The profile README is created by creating a new repository that’s the same name as your username. For example, my GitHub username is m0nica so I created a new repository with the name m0nica.

Now the README.md from that repo is essentially the homepage of her profile. Above the usual list of popular repos, you can see the rendered version of that README on her profile:

Lemme do a super simple version for myself real quick just to try it out…

OK, I start like this:

Screenshot of the default profile page for Chris Coyier.

Then I’ll go to repo.new (hey, CodePen has one of those cool domains too!) and make a repo on my personal account that is exactly the same as my username:

Screenshot showing the create new repo screen on GitHub. The repository name is set to chriscoyier.

I chose to initialize the repo with a README file and nothing else. So immediately I get:

Screenshot of the code section of the chriscoyier repo, which only contains a read me file that says hi there.

I can edit this directly on the web, and if I do, I see more helpful stuff:

Screenshot of editing the read me file directly in GitHub.

Fortunately, my personal website has a Markdown bio ready to use!

Screenshot of Chris Coyier's personal website homepage. It has a dark background and a large picture of Chris wearing a red CodePen hat next to some text welcoming people to the site.

I’ll copy and paste that over.

Screenshot showing the Markdown code from the personal website in the GitHub editor.

After committing that change, my own profile shows it!

Screenshot of the updated GitHub profile page, showing the welcome text from the personal website homepage.

Maybe I’ll get around to doing something more fun with it someday. Monica’s post has a bunch of fun examples in it. My favorite is Kaya Thomas’ profile, which I saw Jina Anne share:

You can’t use CSS in there (because GitHub strips it out), so I love the ingenuity of using old school <img align="right"> to pull off the floating image look.
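
For instance, a stripped-down profile README could float an image with nothing but that attribute. The file name and copy below are made up, purely for illustration:

<img align="right" width="200" src="avatar.png" alt="A profile photo">

### Hi there 👋

Anything written here wraps around the floated image, no CSS required.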


The GitHub Profile Trick originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/the-github-profile-trick/feed/ 14 317762
Using GitHub Template Repos to Jump-Start Static Site Projects https://css-tricks.com/using-github-template-repos-to-jump-start-static-site-projects/ https://css-tricks.com/using-github-template-repos-to-jump-start-static-site-projects/#comments Fri, 04 Oct 2019 14:18:23 +0000 https://css-tricks.com/?p=296323 If you’re getting started with static site generators, did you know you can use GitHub template repositories to quickly start new projects and reduce your setup time?

Most static site generators make installation easy, but each project still requires …


Using GitHub Template Repos to Jump-Start Static Site Projects originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
If you’re getting started with static site generators, did you know you can use GitHub template repositories to quickly start new projects and reduce your setup time?

Most static site generators make installation easy, but each project still requires configuration after installation. When you build a lot of similar projects, you may duplicate effort during the setup phase. GitHub template repositories may save you a lot of time if you find yourself:

  • creating the same folder structures from previous projects,
  • copying and pasting config files from previous projects, and
  • copying and pasting boilerplate code from previous projects.

Unlike forking a repository, which allows you to use someone else’s code as a starting point, template repositories allow you to use your own code as a starting point, where each new project gets its own, independent Git history. Check it out!

Let’s take a look at how we can set up a convenient workflow. We’ll set up a boilerplate Eleventy project, turn it into a Git repository, host the repository on GitHub, and then configure that repository to be a template. Then, next time you have a static site project, you’ll be able to come back to the repository, click a button, and start working from an exact copy of your boilerplate.

Are you ready to try it out? Let’s set up our own static site using GitHub templates to see just how much templates can help streamline a static site project.

I’m using Eleventy as an example of a static site generator because it’s my personal go-to, but this process will work for Hugo, Jekyll, Nuxt, or any other flavor of static site generator you prefer.

If you want to see the finished product, check out my static site template repository.

First off, let’s create a template folder

We’re going to kick things off by running each of these in the command line:

cd ~
mkdir static-site-template
cd static-site-template

These three commands change directory into your home directory (~ in Unix-based systems), make a new directory called static-site-template, and then change directory into the static-site-template directory.

Next, we’ll initialize the Node project

In order to work with Eleventy, we need to install Node.js which allows your computer to run JavaScript code outside of a web browser.

Node.js comes with node package manager, or npm, which downloads node packages to your computer. Eleventy is a node package, so we can use npm to fetch it.

Assuming Node.js is installed, let’s head back to the command line and run:

npm init

This creates a file called package.json in the directory. npm will prompt you for a series of questions to fill out the metadata in your package.json. After answering the questions, the Node.js project is initialized.

Now we can install Eleventy

Initializing the project gave us a package.json file which lets npm install packages, run scripts, and do other tasks for us inside that project. npm uses package.json as an entry point in the project to figure out precisely how and what it should do when we give it commands.

We can tell npm to install Eleventy as a development dependency by running:

npm install -D @11ty/eleventy

This will add a devDependency entry to the package.json file and install the Eleventy package to a node_modules folder in the project.

The cool thing about the package.json file is that any other computer with Node.js and npm can read it and know to install Eleventy in the project node_modules directory without having to install it manually. See, we’re already streamlining things!
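
For example, anyone who grabs a copy of this project later only needs to run a single command to pull in the exact dependencies listed in package.json:

npm install   # reads package.json and installs Eleventy into node_modules/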

Configuring Eleventy

There are tons of ways to configure an Eleventy project. Flexibility is Eleventy’s strength. For the purposes of this tutorial, I’m going to demonstrate a configuration that provides:

  • A folder to cleanly separate website source code from overall project files
  • An HTML document for a single page website
  • CSS to style the document
  • JavaScript to add functionality to the document

Hop back in the command line. Inside the static-site-template folder, run these commands one by one (excluding the comments that appear after each # symbol):

mkdir src               # creates a directory for your website source code
mkdir src/css           # creates a directory for the website styles
mkdir src/js            # creates a directory for the website JavaScript
touch src/index.html    # creates the website HTML document
touch src/css/style.css # creates the website styles
touch src/js/main.js    # creates the website JavaScript

This creates the basic file structure that will inform the Eleventy build. However, if we run Eleventy right now, it won’t generate the website we want. We still have to configure Eleventy to understand that it should only use files in the src folder for building, and that the css and js folders should be processed with passthrough file copy.

You can give this information to Eleventy through a file called .eleventy.js in the root of the static-site-template folder. You can create that file by running this command inside the static-site-template folder:

touch .eleventy.js

Edit the file in your favorite text editor so that it contains this:

module.exports = function(eleventyConfig) {
  eleventyConfig.addPassthroughCopy("src/css");
  eleventyConfig.addPassthroughCopy("src/js");
  return {
    dir: {
      input: "src"
    }
  };
};

Lines 2 and 3 tell Eleventy to use passthrough file copy for CSS and JavaScript. Line 6 tells Eleventy to use only the src directory to build its output.

Eleventy will now give us the output we expect. Let’s put that to the test by running this in the command line:

npx @11ty/eleventy

The npx command allows npm to execute code from the project node_modules directory without touching the global environment. You’ll see output like this:

Writing _site/index.html from ./src/index.html.
Copied 2 items and Processed 1 file in 0.04 seconds (v0.9.0)

The static-site-template folder should now have a new directory in it called _site. If you dig into that folder, you’ll find the css and js directories, along with the index.html file.

This _site folder is the final output from Eleventy. It is the entirety of the website, and you can host it on any static web host.

Without any content, styles, or scripts, the generated site isn’t very interesting:

Let’s create a boilerplate website

Next up, we’re going to put together the baseline for a super simple website we can use as the starting point for all projects moving forward.

It’s worth mentioning that Eleventy has a ton of boilerplate files for different types of projects. It’s totally fine to go with one of these, though I often find I wind up needing to roll my own. So that’s what we’re doing here. Open up src/index.html and add this markup:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Static site template</title>
    <meta name="description" content="A static website">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body>
  <h1>Great job making your website template!</h1>
  <script src="js/main.js"></script>
  </body>
</html>

We may as well style things a tiny bit, so let’s add this to src/css/style.css:

body {
  font-family: sans-serif;
}

And we can confirm JavaScript is hooked up by adding this to src/js/main.js:

(function() {
  console.log('Invoke the static site template JavaScript!');
})();

Want to see what we’ve got? Run npx @11ty/eleventy --serve in the command line. Eleventy will spin up a server with Browsersync and provide the local URL, which is probably something like localhost:8080.

Even the console tells us things are ready to go!

Let’s move this over to a GitHub repo

Git is the most commonly used version control system in software development. Most Unix-based computers come with it installed, and you can turn any directory into a Git repository by running this command:

git init

We should get a message like this:

Initialized empty Git repository in /path/to/static-site-template/.git/

That means a hidden .git folder was added inside the project directory, which allows the Git program to run commands against the project.

Before we start running a bunch of Git commands on the project, we need to tell Git about files we don’t want it to touch.

Inside the static-site-template directory, run:

touch .gitignore

Then open up that file in your favorite text editor. Add this content to the file:

_site/
node_modules/

This tells Git to ignore the node_modules directory and the _site directory. Committing every single Node.js module to the repo could make things really messy and tough to manage. All that information is already in package.json anyway.

Similarly, there’s no need to version control _site. Eleventy can generate it from the files in src, so no need to take up space in GitHub. It’s also possible that if we were to:

  • version control _site,
  • change files in src, or
  • forget to run Eleventy again,

then _site will reflect an older build of the website, and future developers (or a future version of yourself) may accidentally use an outdated version of the site.

Git is version control software, and GitHub is a Git repository host. There are other Git host providers like BitBucket or GitLab, but since we’re talking about a GitHub-specific feature (template repositories), we’ll push our work up to GitHub. If you don’t already have an account, go ahead and join GitHub. Once you have an account, create a GitHub repository and name it static-site-template.

GitHub will ask a few questions when setting up a new repository. One of those is whether we want to create a new repository on the command line or push an existing repository from the command line. Neither of these choices is exactly what we need. They assume we either don’t have anything at all, or we have been using Git locally already. The static-site-template project already exists, has a Git repository initialized, but doesn’t yet have any commits on it.

So let’s ignore the prompts and instead run the following commands in the command line. Make sure to have the URL GitHub provides in the command from line 3 handy:

git add .
git commit -m "first commit"
git remote add origin https://github.com/your-username/static-site-template.git
git push -u origin master

This adds the entire static-site-template folder to the Git staging area. It commits it with the message “first commit,” adds a remote repository (the GitHub repository), and then pushes up the master branch to that repository.

Let’s template-ize this thing

OK, this is the crux of what we have been working toward. GitHub templates allows us to use the repository we’ve just created as the foundation for other projects in the future — without having to do all the work we’ve done to get here!

Click Settings on the GitHub landing page of the repository to get started. On the settings page, check the button for Template repository.

Now when we go back to the repository page, we’ll get a big green button that says Use this template. Click it and GitHub will create a new repository that’s a mirror of our new template. The new repository will start with the same files and folders as static-site-template. From there, download or clone that new repository to start a new project with all the base files and configuration we set up in the template project.
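
If you prefer the command line, the GitHub CLI can do the same thing. The repo name below is hypothetical and the flags can vary between gh versions, so treat this as a sketch rather than gospel:

# creates a new repo from the template and clones it locally
gh repo create my-new-site --template your-username/static-site-template --public --clone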

We can extend the template for future projects

Now that we have a template repository, we can use it for any new static site project that comes up. However, you may find that a new project has needs beyond what’s been set up in the template. For example, let’s say you need to tap into Eleventy’s templating engine or data processing power.

Go ahead and build on top of the template as you work on the new project. When you finish that project, identify pieces you want to reuse in future projects. Perhaps you figured out a cool hover effect on buttons. Or you built your own JavaScript carousel element. Or maybe you’re really proud of the document design and hierarchy of information.

If you think anything you did on a project might come up again on your next run, remove the project-specific details and add the new stuff to your template project. Push those changes up to GitHub, and the next time you use static-site-template to kick off a project, your reusable code will be available to you.

There are some limitations to this, of course

GitHub template repositories are a useful tool for avoiding repetitive setup on new web development projects. I find this especially useful for static site projects. These template repositories might not be as appropriate for more complex projects that require external services like databases with configuration that cannot be version-controlled in a single directory.

Template repositories allow you to ship reusable code you have written so you can solve a problem once and use that solution over and over again. But while your new solutions will carry over to future projects, they won’t be ported backwards to old projects.

This is a useful process for sites with very similar structure, styles, and functionality. Projects with wildly varied requirements may not benefit from this code-sharing, and you could end up bloating your project with unnecessary code.

Wrapping up

There you have it! You now have everything you need to not only start a static site project using Eleventy, but the power to re-purpose it on future projects. GitHub templates are so handy for kicking off projects quickly where we otherwise would have to re-build the same wheel over and over. Use them to your advantage and enjoy a jump start on your projects moving forward!


Using GitHub Template Repos to Jump-Start Static Site Projects originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/using-github-template-repos-to-jump-start-static-site-projects/feed/ 3 296323
How to Contribute to an Open Source Project https://css-tricks.com/how-to-contribute-to-an-open-source-project/ https://css-tricks.com/how-to-contribute-to-an-open-source-project/#comments Mon, 09 Sep 2019 14:10:14 +0000 https://css-tricks.com/?p=294887 The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic familiarity with the command line and Git. If you know the concepts and want …


How to Contribute to an Open Source Project originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic familiarity with the command line and Git. If you know the concepts and want to dive right into the step by step how-to guide, check out this part of the article.

Truly, there is no one way to contribute to an open source project, and to be involved often means more than code slinging. In this article, though, we’re going to focus on the nitty-gritty of contributing a pull request (PR) to someone else’s project on GitHub.

Let’s set the stage…

You come across someone’s project on Github and you love it! You may even decide to use it in your own project. But there’s one small thing you notice… it could be a bug. It could be an enhancement to what’s already there. Maybe you looked through the code and found a nice, optimized way of writing things that’s a little more legible, even as small as an extra indentation in the code.

Here are some initial suggestions and instructions on where to go from here.

Look for a CONTRIBUTING.md document or Contributing Guide in the documentation

Many open source projects know that other people might want to contribute. If they find themselves answering the same question again and again, this document intends to make it easier to get everyone on the same page.

Some things you might find in a Contributing guide:

  • Style guidelines
  • Prerequisites for submitting a PR
  • How to add your change to their documentation
  • A checklist for contribution
  • Explaining the project’s architecture and setup

This document can be as simple as a few notes, or it can be so robust that it takes you a little while to read through it all (like Atom’s Contributing guide, for example).

For larger projects, it makes sense to communicate contributing guidelines up front because PRs and issues can pile up and be a real drag on a maintainer’s time, sorting through contributions that might be out of the project’s scope. Make sure to take some of your own time reading through this guide if it exists because your PR will require some of the maintainer’s time as well.

Look through the existing issues and PRs

Before adding a new issue or submitting a PR, it’s good to check what else is out there. You might find someone has already asked about the same thing, or submitted something similar. You can check in the project’s search box — I usually search through issues that are both open and closed, as it’s important to know if someone already raised my concern and perhaps the maintainer decided to go in another direction. Again, it’s about saving both you and the maintainer time.

Submit an issue

Submitting issues is a core part of the PR submission process. They provide an opportunity to articulate the situation, establish context around it, and provide a forum for discussion that can be attached to the PR itself.

When submitting an issue, I like to write out what my concern is and then re-read it as if I was on the receiving end. People are human — even if what you say is technically correct, you’re not likely to get buy-in for your idea if your tone is off-putting. Consider this: you may be asking for someone to do a lot of work in their spare time. If someone asks you to do work on a Saturday, are you more likely to do so if they ask respectfully or with condescension? You get the picture.

When submitting an issue, make sure you give them all the details they need to get the work done. Some things you might note:

  • If it’s a bug, then what environment are you seeing the problem in? Is it development or production? Perhaps somewhere else?
  • If it’s a feature request, then explain the problem. Sometimes framing this from the perspective of the end user in the form of user stories can be helpful, both to conceptualize the situation and abstract it from any personal feelings.
  • If it’s a general question, then state that up front so the maintainer can avoid spending time trying to figure out if you’re asking for a bug or a feature.
  • If you’d like to submit a PR to improve on the issue, mention that you’d like to do this, then ask permission to proceed (because sometimes maintainers have other items planned you may be unaware of).

Make considerations before starting work

You’re probably eager to start working on your PR by this point. But first, there are still a few customary things to check off before you dig in.

Ask first

I’m a big fan of people asking in an issue if a PR makes sense before they work on one. I don’t hold it as a strict rule, but sometimes I can save them buckets of time and going in the wrong direction if we can clarify what we both want together first. It also helps others know to not implement the same thing (assuming they, too, look through open and closed PRs).

Use labels

If you do submit an issue and everyone agrees a PR is a good idea, then it’s nice for you (or the owner of the repo) to add the label in progress. You can search labels so it’s really clear to everyone you’re working on it.

Work in small chunks!

As a maintainer, it’s frustrating when someone puts in a lot of work and submits a giant honking PR that does 10 different things. It’s really tough to review, and inevitably, they’re doing six things you want, and four things you don’t. Not only that, it’s usually spread out over multiple files, which is difficult to untangle. I’ve definitely closed PRs with some high-quality code I would like just because it would take forever for me to review and manage it. (I will communicate that this is the issue if they would like to resubmit the work as separate PRs, though.)

In my opinion, you have about 1,000% more chance of getting your PR merged and your time spent honored if you split things over multiple, smaller PRs. I love it when people submit a PR per topic. And it can be nice, not required, if each submission is a little spaced out as well to help with the cognitive overload.

Submit your PR

These are the steps I personally use to submit a PR. You can get this done other ways, of course, but I have found the following to be effective in my experience. Also, anything in the commands written in ALL CAPS is information you will need to change for your use.

First off, go to the repo, and fork a copy of the project to your personal GitHub account. Clone it locally and change directory (cd) to where it’s located. (I use HTTPS, but SSH is totally fine as well.)

git clone https://github.com/YOUR-USERNAME/YOUR-FORKED-REPO.git
cd into/cloned/fork-repo

Next up, add a remote upstream to the original branch. This means it will share a connection with that original branch so that you can keep in sync and pull down any updates to the project when they happen.

git remote add upstream https://github.com/ORIGINAL-DEV-USERNAME/REPO-YOU-FORKED-FROM.git
git fetch upstream

Now you can create a new branch and give it a name that relates to the PR topic. Note that a maintainer might have a specific naming convention in mind that is documented in the repo.

git checkout -b GOOD-FORKIN-NAME

Go forth and work on your changes. Be sure to make good commit messages along the way:

git add -A
git commit -m "ADDING IN A TACO DISPENSER"
git push origin GOOD-FORKIN-NAME

GitHub will see the new fork and prompt you to make the PR, which is a nice, helpful touch. You click the button and fill in the details. What issue does it close? You can refer to issues by their number, and GitHub will automatically associate them:

On the PR:

Shows a referenced issue on a PR

In the Issue:

Shows what the reference looks like in the issue

What should you note in the PR description? Anything that helps the maintainer understand context: the changes you made, the larger strategy behind them, or anything else that frames the work.
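
For example, a hypothetical PR description might look like the following (the issue number here is made up). GitHub’s closing keywords, like Closes, will link the referenced issue and close it automatically when the PR is merged:

Closes #42

## What this does
- Adds the taco dispenser module
- Updates the docs to mention it

## Notes for the reviewer
- Kept the change to a single topic so it stays easy to review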

And you’re on your way! 🎉

You may find you need to keep your fork up-to-date with the remote, and pull their changes into yours. To do so, you would run this command:

git pull upstream master

Props to Christina Solana for her Gist which I’ve used as a reference for years and years now.

Always remember: maintainers are often swamped, sacrificing nights and weekends to keep open source projects active and updated. Being respectful, both in terms of their time, and in tone, can set you up for success in contributing.


Open source can be extremely rewarding! Knowing other people are benefitting and directly using something you contributed can be a great way to give back to the community and learn.


How to Contribute to an Open Source Project originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/how-to-contribute-to-an-open-source-project/feed/ 4 294887
Introducing GitHub Actions https://css-tricks.com/introducing-github-actions/ https://css-tricks.com/introducing-github-actions/#comments Wed, 17 Oct 2018 17:26:22 +0000 http://css-tricks.com/?p=277728 It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests …


Introducing GitHub Actions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you and you’re not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment… all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help as well.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it’s not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm or deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit… you name it), to name just a couple of possibilities.

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more… the sky’s the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

A screenshot of the GitHub Actions beta site showing a large blue button to click to join the beta.
The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already notice that I have a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:

screen that shows

I click “Create a New Workflow,” and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

new workflow

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

show all of the action options in the sidebar

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it’s capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we’ll walk through together.

shows options for azure in the sidebar

At the top where you see the heading “GitHub Action for Azure,” there’s a “View source” link. That will take you directly to the repo that’s used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the “uses” option in the Actions panel.

Here’s a rundown of the options we’re provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what’s creating the connection between them. This piece is abstracted away for you in the GUI, but you’ll see in the next section that, if you’re working in code, you’ll need to keep the references the same to have the chaining work.
  • Runs allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for “secrets” and “env” usually looks something like this:

action "deploy" {
  uses = ...
  secrets = [
    "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET",
  ]
}

You can also string multiple actions together in this GUI. It’s very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo’s master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using another provider, this part will change, but you do need to set up an account and service with whatever you’re using in order to deploy there.

First you’ll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we’ll create a Service Principle by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output, that we’ll use in creating our action:

{
  "appId": "APP_ID",
  "displayName": "ServicePrincipalName",
  "name": "http://ServicePrincipalName",
  "password": ...,
  "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What’s in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" {
  on = "push"
  resolves = ["deploy"]
}

action "deploy" {
  uses = "actions/someaction"
  secrets = [
    "TOKEN",
  ]
}

We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well; the full list is here.

The resolves line beneath it, resolves = ["deploy"], is an array of the actions that will be chained following the workflow. This doesn’t specify the order, but rather is a full list of everything. You can see that we called the action that follows “deploy” — these strings need to match; that’s how they reference one another.

Next, we’ll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here’s a list of all of them). But you can also use another person’s repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" {
  uses = ...
  args = "run some code here and use a $ENV_VARIABLE_NAME"
  secrets = ["SECRET_NAME"]
  env = {
    ENV_VARIABLE_NAME = "myEnvVariable"
  }
}

Creating a custom action

What we’re going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values, so that the action executes it all for us. The files look more complicated than they are. Really, we’re taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh

set -e

echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}"

echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus

echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE

echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git

echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv`

git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git

git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script will make sure that if anything blows up the rest of the file doesn’t keep evaluating.
  • The lines following “Getting username/password” look a little tricky — really what they’re doing is extracting the username and password from Azure’s publishing profiles. We can then use them in the following line of code where we add the remote.
  • You might also note that in those lines we passed in -o tsv. This formats the output so that we can pass it directly into an environment variable, as tsv strips out excess headers and other noise.
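
One thing worth calling out: a Docker-based action directory like ./.github/azdeploy also needs a Dockerfile so Actions knows how to build the container that runs entrypoint.sh. As a rough sketch, assuming a base image that ships with the az CLI and git (the exact image here is my assumption, not the article’s original), it could look like this:

# Sketch only: the base image is an assumption; any image with the az CLI and git will work
FROM mcr.microsoft.com/azure-cli

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]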

Now we can work on our main.workflow file!

workflow "New workflow" {
  on = "push"
  resolves = ["Deploy to Azure"]
}

action "Deploy to Azure" {
  uses = "./.github/azdeploy"
  secrets = ["SERVICE_PASS"]
  env = {
    SERVICE_PRINCIPAL="http://sdrasApp",
    TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47",
    APPID="sdrasMoonshine"
  }
}

The workflow piece should look familiar to you — it’s kicking off on push and resolves to the action, called “Deploy to Azure.”

uses is pointing to a path within the repo, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called this SERVICE_PASS, and we’ll configure it by going here and adding it, in settings:

adding a secret in settings

Finally, we have all of the environment variables we’ll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you’d like to name it :)

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will have to also edit the env variables manually within the main.workflow file — once you stop using the GUI, it doesn’t work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful “Hello World” app that redeploys whenever we push to master 🎉

successful workflow showing green
Hello World app screenshot

Game changing

GitHub actions aren’t only about websites, though you can see how handy they are for them. It’s a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Dockerfile in it, or something that’s hosted on Docker directly.

You also don’t need to host the image anywhere as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it’s pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you’re hosting your code.


Introducing GitHub Actions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/introducing-github-actions/feed/ 10 277728
Creating a Static API from a Repository https://css-tricks.com/creating-static-api-repository/ https://css-tricks.com/creating-static-api-repository/#comments Thu, 21 Sep 2017 14:28:36 +0000 http://css-tricks.com/?p=260061 When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people’s browsers as HTML pages. Over the years, countless products used …


Creating a Static API from a Repository originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people’s browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.

Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it’s now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.

Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn’t exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.

Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.

Back to Basics

I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, and using something like GitHub means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.

Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.

I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?

Static Data Stores

Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they’re structured.

The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.

The second example of a static data store is MDN’s browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which, contrary to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.

There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there’s no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.

A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.

Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they’re comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.

Building a Static API Generator

Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.

To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.

budget: 170000000
website: http://marvel.com/guardians
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy

To group movies, we can store the files within language, genre and release year sub-directories, as shown below.

input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml

Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:

http://localhost/english/action/2014/guardians-of-the-galaxy.yaml

and get the contents of the YAML file.

Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.

Format translation

The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.

Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.

This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:

http://localhost/english/action/2014/guardians-of-the-galaxy.json

whilst still allowing editors to author files using YAML or another human-friendly format.

{
  "budget": 170000000,
  "website": "http://marvel.com/guardians",
  "tmdbID": 118340,
  "imdbID": "tt2015381",
  "popularity": 50.578093,
  "revenue": 773328629,
  "runtime": 121,
  "tagline": "All heroes start somewhere.",
  "title": "Guardians of the Galaxy"
}
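
The translation step itself doesn’t need much machinery. Here’s a minimal Node.js sketch of the idea, assuming the js-yaml package; it’s an illustration of the concept, not the generator’s actual source:

const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml')

function convertTree(inputDir, outputDir) {
  fs.mkdirSync(outputDir, { recursive: true })

  fs.readdirSync(inputDir).forEach(entry => {
    const inputPath = path.join(inputDir, entry)

    if (fs.statSync(inputPath).isDirectory()) {
      // Recurse into language/genre/year sub-directories
      convertTree(inputPath, path.join(outputDir, entry))

      return
    }

    if (path.extname(inputPath) !== '.yaml') return

    // Parse the YAML source and write JSON to the same relative path
    const data = yaml.load(fs.readFileSync(inputPath, 'utf8'))
    const outputPath = path.join(outputDir, entry.replace(/\.yaml$/, '.json'))

    fs.writeFileSync(outputPath, JSON.stringify(data, null, 2))
  })
}

convertTree('input', 'output')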

Aggregating data

With consumers now able to consume entries in the best-suited format, let’s look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.

The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:

http://localhost/english/action.json

which would allow consumers to list all action movies in English, or

http://localhost/english.json

to get all English movies.

{  
   "results": [  
      {  
         "budget": 150000000,
         "website": "http://www.thegreatwallmovie.com/",
         "tmdbID": 311324,
         "imdbID": "tt2034800",
         "popularity": 21.429666,
         "revenue": 330642775,
         "runtime": 103,
         "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
         "title": "The Great Wall"
      },
      {  
         "budget": 58000000,
         "website": "http://www.foxmovies.com/movies/deadpool",
         "tmdbID": 293660,
         "imdbID": "tt1431045",
         "popularity": 23.993667,
         "revenue": 783112979,
         "runtime": 108,
         "tagline": "Witness the beginning of a happy ending",
         "title": "Deadpool"
      }
   ]
}

To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it’s not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.

We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.

Pagination

Just like with traditional APIs, it’s important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.

To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:

http://localhost/english/action.json

we’d have

http://localhost/english/action-2.json

and so on.

For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.

{  
   "results": [  
      {  
         "budget": 150000000,
         "website": "http://www.thegreatwallmovie.com/",
         "tmdbID": 311324,
         "imdbID": "tt2034800",
         "popularity": 21.429666,
         "revenue": 330642775,
         "runtime": 103,
         "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
         "title": "The Great Wall"
      },
      {  
         "budget": 58000000,
         "website": "http://www.foxmovies.com/movies/deadpool",
         "tmdbID": 293660,
         "imdbID": "tt1431045",
         "popularity": 23.993667,
         "revenue": 783112979,
         "runtime": 108,
         "tagline": "Witness the beginning of a happy ending",
         "title": "Deadpool"
      }
   ],
   "metadata": {  
      "itemsPerPage": 2,
      "pages": 3,
      "totalItems": 6,
      "nextPage": "/english/action-3.json",
      "previousPage": "/english/action.json"
   }
}
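
The batching logic behind this is simple. As a rough sketch (again, not the module’s actual code), splitting aggregated entries into pages could be as small as:

// Split an array of entries into pages of at most `itemsPerPage` items
function paginate(entries, itemsPerPage) {
  const pages = []

  for (let i = 0; i < entries.length; i += itemsPerPage) {
    pages.push(entries.slice(i, i + itemsPerPage))
  }

  return pages
}

// paginate(englishActionMovies, 2) -> [[movie1, movie2], [movie3, movie4], [movie5, movie6]]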

Sorting

It’s useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.

Putting it all together

With the specification done, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).

To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.

npm init -y
npm install static-api-generator --save

The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.

const API = require('static-api-generator')
const moviesApi = new API({
  blueprint: 'source/:language/:genre/:year/:movie',
  outputPath: 'output'
})

In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. Note that the first segment of the blueprint (source here) should match the name of the directory your data files live in. We also specify the directory where the generated files will be written to.

Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.

moviesApi.generate({
  endpoints: ['movie']
})

We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.

moviesApi.generate({
  endpoints: ['genre', 'movie']
})

To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.

moviesApi.generate({
  endpoints: ['genre', 'movie'],
  root: 'genre'
})

By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.

The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.

moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})

Finally, add a start script to package.json that runs `server.js` (e.g. "start": "node server.js"), then type npm start to generate the API and watch the files being written to the output directory. Your new API is ready to serve – enjoy!

Deployment

At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can’t ask editors to manually run this tool every time they want to make a change to the dataset.

GitHub Pages + Travis CI

If you’re using a GitHub repository to host the data files, then GitHub Pages is a perfect contender to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on http://YOUR-USERNAME.github.io/english/action/2016/deadpool.json.

We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!

After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don’t worry, the token will be safe.

Finally, create a file named `.travis.yml` on the root of the repository with the following.

language: node_js

node_js:
  - "7"

# Run the generator on every build
script: npm start

# Publish the generated files to the gh-pages branch
deploy:
  provider: pages
  skip_cleanup: true            # keep the build output around for the deploy step
  github_token: $GITHUB_TOKEN   # the Personal Access Token set in the Travis settings
  on:
    branch: master              # only deploy builds triggered from master
  local_dir: "output"           # publish just the generator's output directory

And that’s it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Oh, and GitHub Pages has full support for CORS, so consuming the API from a front-end application with Ajax requests is a breeze.
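As a quick illustration, consuming one of those endpoints from the browser is just a regular fetch call against the GitHub Pages URL (the username below is a placeholder):

// Fetch a single movie entry from the static API
fetch('https://YOUR-USERNAME.github.io/english/action/2016/deadpool.json')
  .then(response => response.json())
  .then(movie => {
    // Render the entry, log it, etc.
    console.log(movie)
  })
  .catch(error => console.error('Request failed', error))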

You can check out the demo repository for my Movies API to see some of the endpoints in action.

Going full circle with Staticman

Perhaps the most blatant consequence of using a static API is that it’s inherently read-only – we can’t simply set up a POST endpoint to accept data for new movies if there’s no logic on the server to process it. If this is a strong requirement for your API, that’s a sign that a static approach probably isn’t the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.

But if you just need some basic form of accepting user data, or you’re feeling wild and want to go full throttle on this static API adventure, there’s something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.

It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).

You can configure the fields it accepts, add validation and spam protection, and choose the format of the generated files, like JSON or YAML.
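As a rough sketch only, a staticman.yml entry for a hypothetical movies property could look something like the following. The option names are based on my reading of the Staticman documentation, so double-check them against the docs for your version before relying on this:

# Hypothetical Staticman configuration for a "movies" property.
# Verify the option names against the Staticman docs.
movies:
  allowedFields: ["title", "language", "genre", "year"]
  requiredFields: ["title", "year"]
  branch: "master"
  format: "json"
  moderation: true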

This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, the following happens (a rough client-side sketch follows the list):

  • Staticman receives the data, writes it to a file and creates a pull request
  • Once the pull request is merged, the branch with the source files (master) is updated
  • Travis detects the update and triggers a new build of the API
  • The updated files are pushed to the public branch (gh-pages)
  • The live API now reflects the submitted entry
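Here is the client-side sketch mentioned above. The endpoint structure (username, repository, branch and property) and the payload shape are assumptions based on Staticman's public v2 API, so treat this as a starting point rather than a recipe:

// Rough sketch: submit a new movie to Staticman as a JSON payload.
// The URL segments and field values are placeholders; adjust them to
// match your repository and staticman.yml configuration.
fetch('https://api.staticman.net/v2/entry/YOUR-USERNAME/YOUR-REPO/master/movies', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    fields: {
      title: 'Arrival',
      language: 'english',
      genre: 'sci-fi',
      year: 2016
    }
  })
})
  .then(response => response.json())
  .then(result => console.log('Submitted', result))
  .catch(error => console.error('Submission failed', error))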

Parting thoughts

To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates it to the context of APIs, hopefully keeping the simplicity and robustness associated with that paradigm.

At a time when APIs are such fundamental pieces of any modern digital product, I’m hoping this tool can democratize the process of designing, building and deploying them, and lower the barrier to entry for less experienced developers.

The concept could be extended even further, introducing things like custom generated fields, which the generator populates automatically based on user-defined logic that takes into account not only the entry being created but also the dataset as a whole. For example, imagine a rank field for movies whose numeric value is computed by comparing an entry’s popularity against the global average.

If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I’d love to hear from you!

Creating a Static API from a Repository originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/creating-static-api-repository/feed/ 1 260061
Switching Your Site to HTTPS on a Shoestring Budget https://css-tricks.com/switching-site-https-shoestring-budget/ https://css-tricks.com/switching-site-https-shoestring-budget/#comments Mon, 04 Sep 2017 14:17:11 +0000 http://css-tricks.com/?p=259594 Google’s Search Console team recently sent out an email to site owners with a warning that Google Chrome will take steps starting this October to identify and show warnings on non-secure sites that have form inputs.

Here’s the notice that …


Switching Your Site to HTTPS on a Shoestring Budget originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Google’s Search Console team recently sent out an email to site owners with a warning that Google Chrome will take steps starting this October to identify and show warnings on non-secure sites that have form inputs.

Here’s the notice that landed in my inbox:

The notice from the Google Search Console team regarding HTTPS support

If your site URL does not support HTTPS, then this notice directly affects you. Even if your site does not have forms, moving over to HTTPS should be a priority, as this is only one step in Google’s strategy to identify insecure sites. They state this clearly in their message:

The new warning is part of a long term plan to mark all pages served over HTTP as “not secure”.

Chrome’s current UI for a site served over HTTP and a site served over HTTPS

The problem is that the process of installing SSL certificates and transitioning site URLs from HTTP to HTTPS—not to mention editing all those links and linked images in existing content—sounds like a daunting task. Who has time and wants to spend the money to update a personal website for this?

I use GitHub Pages to host a number of sites and projects for free, including some that use custom domain names. To that end, I wanted to see if I could quickly and inexpensively convert a site from HTTP to HTTPS. I wound up finding a relatively simple solution on a shoestring budget that I hope will help others. Let’s dig into that.

Enforcing HTTPS on GitHub Pages

Sites hosted on GitHub Pages have a simple setting to enable HTTPS. Navigate to the project’s Settings and flip the switch to enforce HTTPS.

The GitHub Pages setting to enforce HTTPS on a project

But We Still Need SSL

Sure, that first step was a breeze, but it’s not the full picture of what we need to do to meet Google’s definition of a secure site. The reason is that enabling the HTTPS setting neither provides nor installs a Secure Sockets Layer (SSL) certificate for a site that uses a custom domain. Sites that use the default web address provided by GitHub Pages are fully secure with that setting, but those of us using a custom domain have to go the extra step of securing SSL at the domain level.

That’s a bummer because SSL, while not super expensive, is yet another cost and likely one you may not want to incur when you’re trying to keep costs down. I wanted to find a way around this.

We Can Get SSL From a CDN … for Free!

This is where Cloudflare comes in. Cloudflare is a Content Delivery Network (CDN) that also provides distributed domain name server services. What that means is that we can leverage their network to set up HTTPS. The real kicker is that they have a free plan that makes this all possible.

It’s worth noting that there are a number of good posts here on CSS-Tricks that tout the benefits of a CDN. While we’re focused on the security perks in this post, CDNs are an excellent way to help reduce server burden and increase performance.

From here on out, I’m going to walk through the steps I used to connect Cloudflare to GitHub Pages so, if you haven’t already, you can snag a free account and follow along.

Step 1: Select the “+ Add Site” option

First off, we have to tell Cloudflare that our domain exists. Cloudflare will scan the DNS records to verify both that the domain exists and that the public information about the domain is accessible.

Cloudflare’s “Add Website” Setting

Step 2: Review the DNS records

After Cloudflare has scanned the DNS records, it will spit them out and display them for your review. Cloudflare indicates that it believes things are in good standing with an orange cloud in the Status column. Review the report and confirm that the records match those from your registrar. If all is good, click “Continue” to proceed.

The DNS record report in Cloudflare

Step 3: Get the Free Plan

Cloudflare will ask what level of service you want to use. Lo and behold! There is a free option that we can select.

Cloudflare’s free plan option

Step 4: Update the Nameservers

At this point, Cloudflare provides us with its server addresses and our job is to head over to the registrar where the domain was purchased and paste those addresses into the DNS settings.

Cloudflare provides the nameservers for updating the registrar settings.

It’s not incredibly difficult to do this, but can be a little unnerving. Your registrar likely has instructions for how to do this. For example, here are GoDaddy’s instructions for updating nameservers for domains registered through their service.

Once you have done this step, your domain will effectively be mapped to Cloudflare’s servers, which will act as an intermediary between the domain and GitHub Pages. However, it is a bit of a waiting game and can take Cloudflare up to 24 hours to process the request.

If you are using GitHub Pages with a subdomain instead of a custom domain, there is one extra step. Head over to the DNS settings in Cloudflare and add a CNAME record that points to <your-username>.github.io, where <your-username> is, of course, your GitHub account handle. You will also need to add a CNAME file to the root of your GitHub project: literally a text file named CNAME with your domain name in it.

Here is a screenshot with an example of adding a GitHub Pages subdomain as a CNAME record in Cloudflare’s settings:

Adding a GitHub Pages subdomain to Cloudflare
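And as a hypothetical example of the CNAME file itself, if the site lives at blog.example.com, the file at the root of the repository contains just that single line:

blog.example.com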

Step 5: Enable HTTPS in Cloudflare

Sure, we’ve technically already done this in GitHub Pages, but we’re required to do it in Cloudflare as well. Cloudflare calls this feature “Crypto” and it not only forces HTTPS, but provides the SSL certificate we’ve been wanting all along. But we’ll get to that in just a bit. For now, enable Crypto for HTTPS.

The Crypto option in Cloudflare’s main menu

Turn on the “Always use HTTPS” option:

Enable HTTPS in the Cloudflare settings

Now any HTTP request from a browser is switched over to the more secure HTTPS. We’re another step closer to making Google Chrome happy.

Step 6: Make Use of the CDN

Hey, we’re using a CDN to get SSL, so we may as well take advantage of its performance benefits while we’re at it. We can speed things up by having Cloudflare automatically minify our assets and by extending the browser cache expiration.

Select the “Speed” option in the settings and allow Cloudflare to auto minify our site’s web assets:

Allow Cloudflare to minify the site’s web assets

We can also set the expiration on browser cache to maximize performance:

Set the browser cache in Cloudflare’s Speed settings

By setting the expiration date further out than the default option, the browser will skip re-requesting resources that more than likely haven’t changed or been updated on each and every visit. This will save visitors an extra download on repeat visits within a month’s time.
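Under the hood, this setting generally shows up as a Cache-Control header on responses served through Cloudflare. The exact value depends on the TTL you choose; for a one-month expiration it would be along the lines of the number of seconds in a month:

Cache-Control: max-age=2678400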

Step 7: Make External Resources Secure

If you use external resources on your site (and many of us do), then those need to be served securely as well. For example, if you use a JavaScript framework and it is not served from an HTTPS source, that blows our secure cover as far as Google Chrome is concerned and we need to patch that up.

If the external resource you use does not provide HTTPS as a source, then you might want to consider hosting it yourself. We have a CDN now that makes the burden of serving it a non-issue.

Step 8: Activate SSL

Woot, here we are! SSL has been the missing link between our custom domain and GitHub Pages ever since we enabled HTTPS in the GitHub Pages settings. This is where we activate a free SSL certificate on our site, courtesy of Cloudflare.

From the Crypto settings in Cloudflare, let’s first make sure that the SSL certificate is active:

Cloudflare shows an active SSL certificate in the Crypto settings

If the certificate is active, move to “Page Rules” in the main menu and select the “Create Page Rule” option:

Create a page rule in the Cloudflare settings

…then click “Add a Setting” and select the “Always use HTTPS” option:

Force HTTPS on the entire domain! Note the asterisks in the pattern, which are crucial.
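For reference, the URL pattern in that rule typically looks something like the following, with the wildcards covering any subdomain and every path on the site (swap in your own domain):

*example.com/*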

After that, click “Save and Deploy” and celebrate! We now have a fully secure site in the eyes of Google Chrome and didn’t have to touch a whole lot of code or drop a chunk of change to do it.

In Conclusion

Google’s push for HTTPS means front-end developers need to prioritize SSL support more than ever, whether it’s for our own sites, company sites, or client sites. This gives us one more incentive to make the switch, and the fact that we can pick up free SSL and performance enhancements through a CDN makes it all the more worthwhile.

Have you written about your adventures moving to HTTPS? Let me know in the comments and we can compare notes. Meanwhile, enjoy a secure and speedy site!


Switching Your Site to HTTPS on a Shoestring Budget originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/switching-site-https-shoestring-budget/feed/ 30 259594