All the coding has been completed and it’s (almost) perfect. So, now what?
- How do we make sure it stays (almost) perfect?
- Is linting ok?
- Are all tests passing?
- Do we have all builds (versions) archived?
- And… all above for all our 19 (nineteen, in words) GitHub repositories!
Let’s take some action on it.
First, we need to set up CI (Continuous Integration) and some rules for the repository’s main (master) branch. With GitHub this is really “easy.” We created a branch protection rule that disallows direct commits, so pull requests can only be merged once all the steps of our CI have passed. These steps are simple: code linting and tests. Moreover, the tests must pass on multiple PHP versions so that we are sure our plugins support all of them! Oh yes - important to remember - include administrators! Same rules for everyone!
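For illustration, here is a minimal sketch of such a CI workflow. The linter and test commands, the setup-php action inputs, and the exact PHP versions are assumptions - adjust them to whatever your project actually uses.

name: CI
on:
  pull_request:
    branches: [master]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.1'
      - run: composer install --no-progress
      # "composer run lint" is a placeholder for your linting script (PHPCS, PHPStan…)
      - run: composer run lint
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # illustrative version list - the point is to test every PHP version we claim to support
        php: ['7.4', '8.0', '8.1']
    steps:
      - uses: actions/checkout@v3
      - uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php }}
      - run: composer install --no-progress
      # "composer run test" is a placeholder for your PHPUnit (or other) test script
      - run: composer run test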
Once we have ensured the code is (almost) perfect, we need to build our plugins and prepare them for distribution. Here comes action number 2: the build process!
The build process is triggered using GitHub’s releases feature, which is the only way to activate it. Here, we decided to have 2 types of builds (releases): prerelease and release. Why is this so?
Simply put, we have an assurance policy that must be confirmed by a human! Yes, by a human 😃
Speaking of the build process, its steps are very “simple”: build the assets (JS, CSS…), grab some files from the codebase, zip them up, and, most importantly, attach the zip as an asset to the (GitHub) release so that we can keep track of every version we ship. For a prerelease, we send links to our Quality Assurance humans, from where they can download the build and make sure our (almost) perfect code WORKS perfectly - yes, there is no “almost” here ;)
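To give a rough idea, a build workflow along these lines can be triggered for both release types; the build script, file names, and paths below are purely illustrative.

name: Build release asset
on:
  release:
    types: [prereleased, released]
jobs:
  build-release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # build.sh and the zipped folder are placeholders for the real build steps
      - name: Build assets and create zip
        run: |
          ./bin/build.sh
          zip -r plugin-${{ github.event.release.tag_name }}.zip build/
      # attach the zip to the release so every shipped version stays archived
      - name: Attach asset to the release
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ github.event.release.upload_url }}
          asset_path: plugin-${{ github.event.release.tag_name }}.zip
          asset_name: plugin-${{ github.event.release.tag_name }}.zip
          asset_content_type: application/zip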
After our QA policy is confirmed, we are ready to build, release, and ship products to our customers.
Once the release is published, we come to our 3rd action: deploying the extension build asset to our website, from where our customers can download it.
This step is triggered automatically after the release is published on GitHub’s releases page. The process retrieves the release asset via the GitHub API and sends it to the deploy page on our website. Here we also perform some checks, such as not allowing a version older than the current (active) one to be published, because our customers cannot “downgrade” the extensions.
Also, the deployment process can be triggered manually if anything goes wrong - yes, it can go wrong - in the end, we are all humans, and we make mistakes.
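As a sketch, the deploy job could look roughly like this. DEPLOY_URL and DEPLOY_TOKEN are hypothetical names for our website’s deploy endpoint and its credentials, and the gh/curl commands stand in for the actual API calls we make.

name: Deploy to website
on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      tag:
        description: Release tag to (re)deploy
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Download the release asset
        env:
          GH_TOKEN: ${{ github.token }}
          # fall back to the manual input when the run was not triggered by a release
          TAG: ${{ github.event.release.tag_name || github.event.inputs.tag }}
        run: gh release download "$TAG" --repo ${{ github.repository }} --pattern '*.zip'
      - name: Send the asset to the deploy page
        # DEPLOY_URL / DEPLOY_TOKEN are placeholders; the endpoint also rejects versions older than the active one
        run: |
          curl --fail -X POST "${{ secrets.DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.DEPLOY_TOKEN }}" \
            -F "package=@$(ls *.zip | head -n 1)"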
Challenges and problems we faced
Testing
Is there a way to test GitHub Actions locally?! YES - and no. Why so? Simply put, it depends on the features you use. But in general, most of it can be tested locally.
“Think globally, act locally” is the motto of the act library (https://github.com/nektos/act), which allows you to run your GitHub Actions locally. The documentation is good enough to get you started very quickly!
For users on Apple’s M1 chips: you should add the following flag when running commands:
--container-architecture linux/amd64
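For example (the event and job names below are assumptions, not our actual setup):

# simulate a pull_request event locally, forcing the amd64 image on Apple silicon
act pull_request --container-architecture linux/amd64
# or run just a single job, e.g. the "lint" job
act -j lint --container-architecture linux/amd64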
Reusable workflows
This is a great feature! “Reusable workflows help you reduce duplication by enabling you to reuse an entire workflow as if it were an action,” as is stated on GitHub’s site!
Here is an example of the Deploy Production workflow that we (re)use in other workflows. This allows us to trigger a production deployment manually from the GitHub UI (workflow_dispatch) or to call it from another workflow (workflow_call).
name: Deploy Production
on:
  workflow_dispatch:
    inputs:
      reason:
        description: What are you releasing?
        required: true
  workflow_call:
    secrets:
      SECRET_1:
        required: true
      SECRET_2:
        required: true
# ...
What’s important to know here is that you must pass any secrets you need to a reusable workflow. That said, in the example above, when we run this workflow as a “standalone” (trigger it manually from the GitHub UI), it will pick up SECRET_1 and SECRET_2 from our Actions secrets in GitHub.
When calling (triggering) this workflow from another one, as mentioned above, we must explicitly provide these secrets to it, as shown below. The names must match those in the reusable workflow definition (and the Actions secrets).
deploy-production:
  name: Deploy
  if: ${{ github.event.action == 'released' }}
  needs: build-release
  uses: ./.github/workflows/deploy-production.yml
  secrets:
    SECRET_1: ${{ secrets.SECRET_1 }}
    SECRET_2: ${{ secrets.SECRET_2 }}
You can find out more about reusable workflows here: https://docs.github.com/en/actions/using-workflows/reusing-workflows
Preventing code duplication
Yes, that’s also possible! You can define your own “local” actions - a small (or big) portion of code that can be reused (called) from anywhere in your workflows.
Here is an example. The action is defined in its own file, named action.yml:
name: Build
description: Build files for deployment
outputs:
  ASSET_PATH:
    description: Asset path
    value: ${{ steps.build-sh.outputs.ASSET_PATH }}
  ASSET_NAME:
    description: Asset file name
    value: ${{ steps.build-sh.outputs.ASSET_FILE_NAME }}
runs:
  using: composite
  steps:
    - id: build-sh
      run: ${{ github.action_path }}/build.sh
      shell: bash
Calling that action afterwards is simple, as shown below:
- name: Build
  id: build
  uses: ./.github/actions/build
Its outputs can also be used, as shown here:
- name: Upload release asset
  uses: actions/upload-release-asset@v1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    upload_url: ${{ github.event.release.upload_url }}
    asset_path: ${{ steps.build.outputs.ASSET_PATH }}
    asset_name: ${{ steps.build.outputs.ASSET_FILE_NAME }}
    asset_content_type: application/zip
This action is called whenever we need to create assets for deployment or testing. Once written, it can be reused as many times as we need!
Conclusion
Given that we were migrating from Jenkins to GitHub Actions (whoops, did I say that?!), I would like to mention one BIG advantage I noticed.
For any custom action that needs to be performed in a workflow, you can use (write code in) whatever language you want - bash, PHP, node… It’s entirely up to you.
That said, we could reuse almost all our code running on Jenkins with (almost) no changes at all.
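For instance, a step can simply shell out to an existing bash or PHP script (the script path below is illustrative):

# any script from the old Jenkins setup can be called directly from a workflow step
- name: Run legacy release script
  run: php ./ci/prepare-release.php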
So, our migration was not a hard task (but it was a bit time-consuming), and I encourage you to give it a try.
It is worth it ;)