Leveraging Cloudflare Workers for Edge API Authentication
Currently, there is at least ~70 ms of latency between EU and US data centers, and that is unlikely to improve significantly thanks to the speed of light and other annoying physics-based limitations. The internet could be a lot faster if we took all the people and servers and put them on the same continent, but that brings other issues (where would I get my free healthcare?). Fortunately, edge computing allows you to bring your web app closer to your customers without having to organise a mass relocation.
Providers like Cloudflare and AWS leverage their existing networks of data centers all over the world to bring your application to edge locations, as close to your users as possible. In this article, we’ll explore how Cloudflare Workers can be used to deploy a small serverless function that performs API authentication on the edge, providing an effective way to drop unauthorised and invalid requests in a high-traffic environment like Flare.
Why perform API auth on the edge at all?
Flare receives a lot of invalid requests: about 40% of all incoming API requests carry invalid, expired, or missing API keys. In an ideal world, 100% of our CPU time (and budget) would go to handling valid requests from paying customers, not to rejecting unauthorised ones. By placing a cheap and fast edge function in front of our API, we can reject that 40% of incoming traffic before it ever hits our ingress servers.
Keep in mind that we’re still authenticating the incoming API requests on our own servers as well. Performing authentication twice may appear counter-intuitive, but the actual API endpoints would remain publicly accessible to anyone who bypasses the edge function and accesses the API directly. Additionally, keeping the authentication in the application code keeps the application nice and simple to run and debug locally.
Choosing Cloudflare Workers
After careful consideration (read: a panicked evening during a suspected DDoS attack), we decided to use Cloudflare’s excellent (and free) DDoS protection and WAF. When the time came to investigate serverless functions for the aforementioned API auth, we quickly settled on Cloudflare Workers too. They proved to be a good choice, with instant cold starts, reasonable pricing, global availability (thanks to their edge-based nature), and easy integration with our existing infrastructure on Cloudflare.
Let’s write a basic authentication Worker
Let’s put the theory into practice. As the name suggests, an edge function is not much more than a function. The code for a very basic Cloudflare Worker looks like this:
export default {
  async fetch(request: Request): Promise<Response> {
    return new Response('Hello world!');
  }
};
To authenticate the API request here, we should check the API key in the `request.headers`. If everything checks out, we can use the `fetch` API to send the request to our actual API endpoint:
export default {
  async fetch(request: Request): Promise<Response> {
    const apiKey = request.headers.get('X-Api-Key');
    // authenticate() is a placeholder; we'll implement the actual check below
    if (!authenticate(apiKey)) {
      return new Response('Invalid API key', { status: 403 });
    }
    // Forward the `request` to example.com/api
    return fetch('https://example.com/api', request);
  }
};
Cloudflare Workers KV
To authenticate the `apiKey`, it might be tempting to connect to your MySQL database and check the appropriate tables. However, because we’re executing this code on the edge, the database might be many milliseconds away, resulting in slow queries! Additionally, opening a new database connection for every incoming API request would undermine the purpose of having API authentication on the edge in the first place.
Instead of hitting the database directly, we can leverage Cloudflare Workers KV, a low-latency and globally synced key-value store. It’s also accessible from the Cloudflare API, allowing us to sync a list of all active API keys to Workers KV, directly from a CRON job in our application. Accessing the Workers KV store from our Cloudflare Worker is made really easy with KV bindings:
interface Env {
  // Bound to the KV namespace in the Worker configuration
  ApiKeys: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // env.ApiKeys is bound to the KV store:
    const apiKeys = await env.ApiKeys.get<string[]>('api_keys', { type: 'json' }) ?? [];
    const apiKey = request.headers.get('X-Api-Key');
    if (!apiKey || !apiKeys.includes(apiKey)) {
      return new Response('Invalid API key', { status: 403 });
    }
    return fetch('https://example.com/api', request);
  }
};
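The sync side of this lives in our application rather than in the Worker. As a minimal sketch, assuming Node-style environment variables (the variable names are placeholders), a CRON job could overwrite the `api_keys` entry through Cloudflare’s REST endpoint for writing KV values:
// A minimal sketch of the application-side sync job; the environment
// variable names are placeholders. It overwrites the `api_keys` entry in
// Workers KV through Cloudflare's REST API.
async function syncApiKeysToKV(activeKeys: string[]): Promise<void> {
  const url = `https://api.cloudflare.com/client/v4/accounts/${process.env.CF_ACCOUNT_ID}` +
    `/storage/kv/namespaces/${process.env.CF_KV_NAMESPACE_ID}/values/api_keys`;
  const response = await fetch(url, {
    method: 'PUT',
    headers: {
      'Authorization': `Bearer ${process.env.CF_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(activeKeys),
  });
  if (!response.ok) {
    throw new Error(`KV sync failed with status ${response.status}`);
  }
}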
Additionally, the Workers Cache API can be used to decrease KV costs and latency even further.
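Here’s a minimal sketch of that caching layer, assuming the `Env` interface from above (the cache key URL and 60-second TTL are arbitrary choices, not Cloudflare requirements):
// Cache the KV lookup in the data-center-local cache; the cache key URL is
// a made-up internal name and the 60s TTL is an arbitrary choice.
async function getApiKeys(env: Env, ctx: ExecutionContext): Promise<string[]> {
  const cache = caches.default;
  const cacheKey = new Request('https://edge-auth.internal/api-keys');
  // Serve from cache when possible:
  const cached = await cache.match(cacheKey);
  if (cached) {
    return cached.json();
  }
  // Fall back to KV and cache the result for subsequent requests:
  const apiKeys = await env.ApiKeys.get<string[]>('api_keys', { type: 'json' }) ?? [];
  const response = new Response(JSON.stringify(apiKeys), {
    headers: { 'Cache-Control': 'max-age=60' },
  });
  ctx.waitUntil(cache.put(cacheKey, response));
  return apiKeys;
}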
Getting ready for production
Of course, there’s more to it than just these 20 lines of code. For Flare, we have additional code in place to handle CORS requests, cache API keys using Worker cache, and perform extra validation.
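As one example of those extras, CORS preflights can be answered directly from the Worker. A minimal sketch (the allowed origin, methods, and headers shown are illustrative and should be tightened for production):
// Answer CORS preflights at the edge; these header values are illustrative.
function handleOptions(): Response {
  return new Response(null, {
    status: 204,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
      'Access-Control-Allow-Headers': 'X-Api-Key, Content-Type',
      'Access-Control-Max-Age': '86400',
    },
  });
}

// In the fetch handler, before the API key check:
// if (request.method === 'OPTIONS') return handleOptions();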
To deploy our Cloudflare Worker, we can use Cloudflare’s `wrangler` CLI. With a simple config file and the `wrangler` command, it’s really easy to deploy to multiple environments with different environment variables, run the function locally, or tail the production logs. The config file also defines the `routes` that the Worker function will be active on, which allows us to essentially use the Worker as a middleware for an existing route.
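For reference, a minimal sketch of such a `wrangler.toml` (the name, routes, and namespace ID are placeholders, not our actual configuration):
# A minimal sketch of wrangler.toml; the name, routes, and IDs are placeholders.
name = "api-auth-worker"
main = "src/index.ts"
compatibility_date = "2023-01-01"

# Run the Worker as middleware on the existing API route:
routes = ["example.com/api/*"]

# The KV binding exposed as env.ApiKeys in the Worker:
kv_namespaces = [
  { binding = "ApiKeys", id = "<your-kv-namespace-id>" }
]

# Per-environment overrides, selected with e.g. `wrangler deploy --env staging`:
[env.staging]
routes = ["staging.example.com/api/*"]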
I swear this is not a sales pitch
Cloudflare deserves more credit for everything they offer in their free tier. With DDoS protection, a basic WAF, and an abundance of free Worker invocations (they’re practically throwing them at us), the load on our ingress servers has dropped drastically. Additionally, we now have the ability to manage and route traffic through code more easily, should we feel the need.