Rethinking deploys at Flare
Flare has long relied on locally running Laravel Envoy for our app deployments. However, as the application and infrastructure evolve, so do the challenges. This article documents some of our endeavors to come up with a more robust solution in 2024 while keeping our deploys as simple as possible.
Our current deployment process
At Flare, we've been long-time fans of Laravel Envoy. This task runner lets you use Blade syntax to run bash scripts locally or on your servers over SSH, which makes it a great tool for deploying your application. For years we've used Envoy to asynchronously SSH into our production servers, install and build dependencies, and finally perform an atomic deployment. This strategy is based on a blog post on Servers for Hackers from back in 2015.
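The Servers for Hackers pattern boils down to building each release in its own directory and then swapping a symlink, so visitors never see a half-built release. A minimal local sketch of the mechanics (paths are illustrative; the clone, composer install and asset build are stubbed out with a placeholder file):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative paths; a real deploy uses the application's own directories.
APP=/tmp/demo-app
RELEASE="$APP/releases/$(date +%Y%m%d%H%M%S)"

mkdir -p "$RELEASE"
# On a real server this is where you'd run, inside the new release dir:
#   git clone ... "$RELEASE"
#   composer install --no-dev --prefer-dist
#   yarn && yarn run production
echo "release payload" > "$RELEASE/index.txt"

# The swap is a single symlink replacement, so the web server's document
# root ($APP/current) always points at a fully built release.
ln -sfn "$RELEASE" "$APP/current"
```

Old release directories pile up on disk over time, which is exactly why the cleanup and monitoring chores mentioned below exist.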
A couple drawbacks
Even though we've tweaked and changed a lot in the past 9 years, this approach has a couple of fundamental drawbacks:
- because the deploy runs separately on every server, it's as slow as the slowest server
- while usually not a problem, the results of composer install or the front-end build might end up being different on every server (for example due to different OS or cached dependencies)
- disk space on every server needs to be monitored to avoid having a bunch of old releases taking up too much space
- credentials for private NPM registries and Composer repositories need to be available on every server
- every developer needs SSH access to every server
- Envoy requires quite a bit of additional "boilerplate" code to set up zero-downtime, atomic deployments
Some of these issues have been (partially) fixed by building the front-end assets locally and using Airdrop to distribute the build to our servers, but ultimately we're looking for a faster, more resilient, more 2024-ready solution.
Some requirements and ideas
We've explored modern deployment solutions such as serverless and containerization in the past. However, for Flare's needs we prefer a more traditional deploy script that is transparent, simple to understand, and easy to run. That said, we will re-evaluate some key parts of our deploy script to address the downsides mentioned above.
Use Deployer with recipes instead of Envoy
Envoy is great for small websites with one or two servers. It supports running tasks on multiple servers, but deploying Flare from multiple branches to multiple servers across multiple environments (staging/production) required excessive boilerplate code.
Instead, we've started looking at Deployer. It's been around for a long time and provides similar features to Laravel Envoy. Additionally, it includes recipes for zero-downtime and atomic Laravel deployments out of the box. This eliminates the need for much of the boilerplate code in our current deployment script.
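For context, getting started with Deployer looks roughly like this — the stage name is a placeholder, and the exact host-selection syntax depends on your Deployer version:

```bash
# Pull Deployer into the project and scaffold a deploy.php
composer require --dev deployer/deployer
vendor/bin/dep init            # pick the Laravel recipe when prompted

# deploy.php declares hosts and settings; after that, an atomic
# zero-downtime deploy is a one-liner:
vendor/bin/dep deploy production
```

The Laravel recipe ships with the releases directory layout, symlink swap, and shared `.env`/`storage` handling that we previously had to script by hand in Envoy.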
Deploy on GitHub Actions instead of locally
We've been running deployments from our local machines for years. Deploying straight from our local terminals has been pretty useful to keep the deploy process visible and the feedback loop short. While this worked great with a short deploy script and one or two developers, it is now becoming an issue. The biggest downside is that every developer needs access to all resources required to perform a deploy (e.g. SSH access, .env files, access to S3 for builds, etc.).
Instead, we'll be switching to GitHub Actions as a CI/CD service. This way there's a single service that's responsible for accessing secrets and production services. This also enables us to further automate our release process.
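A trimmed-down workflow for this kind of setup might look like the following — the branch, PHP version, action versions, and secret names are assumptions for illustration, not our actual pipeline:

```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - name: Install and build
        run: |
          composer install --no-dev --prefer-dist
          yarn install --frozen-lockfile
          yarn run production
      - name: Deploy
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: vendor/bin/dep deploy production
```

All credentials live in the repository's encrypted secrets, so no individual developer machine needs SSH keys or production `.env` files anymore.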
Centralised builds, distributed using rsync
One of the biggest annoyances in our current approach is that the deploy/build code is executed separately on every server. If we have 5 servers, we run tasks like composer install and yarn run production once on every server. This doesn't just waste CPU cycles, it also puts unnecessary pressure on production servers. The go-to solution would be to build the front-end assets on a separate CI/CD build server and copy the build to each server using something like Airdrop. However, we decided to take it one step further by preparing and building the application in a GitHub Actions runtime and using rsync to copy the entire application directory to every server, not just the required build files.
Use Airdrop to only build when necessary
We're already using Airdrop in our current deploy script and we love it. GitHub CI/CD pipelines offer numerous ways to cache build files, but Airdrop takes it one step further by allowing us to cache the entire build and define a set of rules that only invalidate this cached build when any of the relevant files change. This would also be possible in a (lengthy) bash script, but our goal is to simplify the deploy script, not complicate it further. It also integrates nicely with GitHub Actions' cache store, making it possible to cache our front-end build right next to our deploy server, speeding things up quite a bit.
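Conceptually, Airdrop's invalidation rule behaves like a content hash over the files that affect the build. This hand-rolled sketch illustrates the idea (it is not Airdrop's actual CLI, and the file list is an example):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Work in a scratch dir with stand-in build inputs.
cd "$(mktemp -d)"
printf '{"name":"demo"}' > package.json
printf 'lockfile'        > yarn.lock

# Hash every file that should invalidate the cached build when it changes.
HASH=$(cat package.json yarn.lock | sha256sum | cut -d' ' -f1)

CACHE="/tmp/build-cache/$HASH.tar"
if [ -f "$CACHE" ]; then
  echo "cache hit: reuse $CACHE"        # Airdrop would download the stored build
else
  echo "cache miss: build, then store"  # Airdrop would upload the fresh build
  mkdir -p /tmp/build-cache && touch "$CACHE"
fi
```

The key property is that an unchanged dependency set maps to the same hash, so repeat deploys skip the front-end build altogether.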
Wrapping things up
We have already started implementing and testing these changes and the first results are promising. Besides the benefits mentioned above, the deployment time has also dropped from 8-10 minutes on an M1 MacBook Pro to slightly over 3 minutes on the default GitHub Actions runner.
Finally, while working on this new deployment approach, we came across another excellent blog post about atomic Laravel deploys using Deployer and GitHub Actions. If you're deploying to a single server, it's worth checking out "Deploying a Laravel Application with Deployer and GitHub Actions" by Chris Page. He goes into more detail about atomic releases and provides practical code examples to get things set up.