Monitor your website with cloud functions


A while back I noticed that my blog was down, and after some investigation I discovered that it had been down for a few days. The blog is hosted on a virtual machine I'm running for free, set up in a VM pool which is run by some friends of mine. While this is great, they aren't really providing any uptime guarantee, and they aren't necessarily always quick to notice when there are problems.

I already had some experience with Uptime Robot - a service that continuously monitors websites and can call or send an SMS whenever there is a problem reaching them. However, it costs money, as do all the alternatives I've found - at least for the SMS options. Email notifications are free, but I'm not consistent enough in checking my email for that to be a satisfactory alerting channel. Sometimes a few days pass between each log-in, and I'd like to be notified ASAP if my precious blog (and a few other services I'm running) goes down.

I decided that this would be a good opportunity to test out some cloud platform functionality I hadn't had the chance to use yet. Looking around, I discovered that Google Cloud Platform has a scheduler that can be configured to run at any interval, and that it can trigger a cloud function that performs the health check. Integrating this with Pushbullet lets me get notifications on my phone whenever a service I'm monitoring has issues.

How

In this small tutorial I'll use GCP (Google Cloud Platform), but I'm sure both AWS and Azure can be set up similarly within their free tiers. To access GCP, go to console.cloud.google.com, log in with your Google account, and accept the terms and conditions.

First, we need to set up the message pipeline. Find Pub/Sub in the left menu, and click the link. Then, create a new topic for these messages.

Note the full topic name when creating the topic

Then, go back to the left menu, click Cloud Functions, and set up a new function with the Pub/Sub topic you just created as its trigger.

The cloud function is written in Python, and is a pretty basic HTTP status checker:

import os

import requests
from pushbullet import Pushbullet

pb = Pushbullet(os.environ['PUSHBULLET_ACCESS_TOKEN'])

def check_status(event, context):
    services_down = []

    # IRC
    try:
        irc_r = requests.head("https://irc.blixhavn.dev", timeout=10)
        if irc_r.status_code != 200:
            services_down.append(f"IRC service seems to be down ({irc_r.status_code})")
    except requests.RequestException as e:
        # A completely unreachable host raises instead of returning a status code
        services_down.append(f"IRC service seems to be down ({e.__class__.__name__})")

    # Blog
    try:
        blog_r = requests.head("https://blixhavn.dev", timeout=10)
        if blog_r.status_code != 200:
            services_down.append(f"Blog seems to be down ({blog_r.status_code})")
    except requests.RequestException as e:
        services_down.append(f"Blog seems to be down ({e.__class__.__name__})")

    if services_down:
        pb.push_note("Uptime error", "\n".join(services_down))

With the following requirements.txt:

# Function dependencies, for example:
# package>=version
pushbullet.py>=0.11
requests>=2.20

Basically, I send a HEAD request (like a regular page load, except the server returns only the headers, not the content) to each of the services I want to monitor, and send a Pushbullet message if any of them fails or returns a non-200 status code. The Pushbullet message is then displayed on all devices I have configured - in my case, my phone and computers.
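Before deploying, the HTTP part of the check can be sanity-tested locally without touching the network or Pushbullet. This is just a sketch: head_status is a hypothetical helper mirroring the function's logic, and unittest.mock stands in for the real services:

```python
from unittest import mock

import requests

def head_status(url):
    """Return the HTTP status code, or None if the request fails outright."""
    try:
        return requests.head(url, timeout=10).status_code
    except requests.RequestException:
        return None

# Simulate a service answering with an error code
with mock.patch("requests.head") as fake_head:
    fake_head.return_value.status_code = 503
    assert head_status("https://blixhavn.dev") == 503

# Simulate a service that is completely unreachable
with mock.patch("requests.head", side_effect=requests.ConnectionError):
    assert head_status("https://blixhavn.dev") is None
```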

Google Cloud Platform also lets you configure environment variables that are available to the running function. Here, I've used this to set PUSHBULLET_ACCESS_TOKEN to my Pushbullet access token, which I acquired by following the Pushbullet documentation.
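As a small safeguard, the token lookup can fail fast with a readable error when the variable was never configured. This is a sketch of my own, not part of the original function:

```python
import os

def get_token():
    """Read the Pushbullet token, failing fast if it is missing."""
    token = os.environ.get("PUSHBULLET_ACCESS_TOKEN")
    if not token:
        # Raising here makes a missing token obvious in the function logs,
        # instead of an obscure KeyError from inside the Pushbullet client.
        raise RuntimeError("PUSHBULLET_ACCESS_TOKEN is not set - configure it "
                           "under the function's environment variables")
    return token
```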

Scheduling

Creating the cloud function is only half the setup - the other half is running it regularly. This is achieved with Cloud Scheduler, which I configured to send a message every five minutes (tip: crontab.guru can help you craft your own schedule).
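For reference, "every five minutes" in the cron syntax that Cloud Scheduler's frequency field expects looks like this:

```
*/5 * * * *
```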

Again, the topic needs to match the one we've used before.

Testing the setup

If you've done all the steps correctly, your uptime monitor should now be running. To test it, you can either shut down whatever you're monitoring for five minutes, or temporarily modify the cloud function to check a bogus URL that returns 404 (or anything non-200). You can also trigger the function manually from either the Cloud Functions or the Cloud Scheduler panel if you don't want to wait. If nothing seems to happen, the logs, found under "Logging" in the left menu, are useful for debugging.

"Test function" or "Run now"

Some numbers

At the moment, Google Cloud Platform's free tier allows for 2 million invocations, 400,000 GB-seconds, 200,000 GHz-seconds, and 5 GB of internet egress traffic per month. I haven't been able to find any information on how much CPU the function actually consumes, so let's look at the other limits.

Dividing the memory limit by the allocated memory (128 MB) gives us 3,125,000 running seconds, which is more than a month (~2.6 million seconds) - so no limit there. As for the 2 million invocations, that translates to approximately one request per 1.3 seconds. However, my logs indicate that the script takes about 3 seconds to run, meaning that fewer than 1 million invocations are even theoretically possible. The HEAD responses are less than half a kilobyte, so 5 GB of traffic allows for at least 10 million requests per month. As such, the cloud function could actually be run continuously within the free tier. Note, however, that while a service is down, you will receive push notifications at the same interval as you're checking the status. Addressing this would require the function to maintain some sort of state, which adds a new layer of complexity. Therefore, I landed on five minutes as a good compromise.
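The back-of-the-envelope figures above can be checked in a few lines of Python (the 128 MB figure is my function's memory allocation, the smallest size GCP offers):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600        # 2,592,000 - roughly 2.6 million
ALLOCATED_MB = 128                        # the function's memory allocation

# Runtime budget from the 400,000 GB-second memory quota
print(400_000 * 1000 // ALLOCATED_MB)     # 3125000 seconds - more than a month

# Average spacing between the 2 million free invocations
print(round(SECONDS_PER_MONTH / 2_000_000, 2))   # 1.3 seconds

# Requests covered by 5 GB of egress at ~0.5 kB per HEAD response
print(int(5e9 / 500))                     # 10000000 requests
```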

So there you have it: a simple uptime monitor using free cloud functions. If you have any ideas for improvements, or other input, let me know :)