Metrics on your custom shell scripts
I was backing up my Arch desktop and wanted a way to visualize whether my backups are succeeding or failing. This is the stack I will be using for that:
- Pushgateway, Prometheus and Grafana running on Kubernetes
- A Linux host that I want to back up
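Before wiring anything up, it is worth confirming that the monitoring stack is up. A quick check, assuming everything runs in the monitoring namespace (the same namespace shows up later as a label on the metric):
# List the monitoring components running on the cluster
kubectl -n monitoring get pods
# And the services that expose them
kubectl -n monitoring get svc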
How I am going to do this
I have an NFS server backed by a 3-node replicated GlusterFS cluster, which is mounted on my Arch desktop at /mnt. The idea is that every 6 hours a bash script runs and uses rsync to back up a set of directories to /mnt, so the data ends up on the NFS server.
Once the backup has completed, we send a metric to Pushgateway and then use Grafana to visualize it.
Arch Desktop
I have installed cronie and rsync, and my mounts look like this:
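As a quick check, something like this confirms the NFS share is on /mnt (the server name and export path below are just placeholders):
# Verify the NFS share is mounted at /mnt
# (server name and export path are placeholders)
mount | grep /mnt
# nfs-server:/export/backups on /mnt type nfs4 (rw,relatime,...)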
Then in my scripts directory I have a backup script:
#!/bin/bash
if [[ "$(cat /mnt/status.lock)" == "ok" ]]
then
  # Good to backup
  echo "lock returned ok, ready to backup"
  # Source directories
  SRC1=~/workspace
  SRC2=~/scripts
  SRC3=~/Documents
  # Destination directory on the NFS mount
  DEST=/mnt/arch-desktop
  # Rsync options: archive mode, verbose, delete files removed from the source
  OPTIONS=(-av --delete)
  # Sync each source directory to the destination
  rsync "${OPTIONS[@]}" "$SRC1" "$DEST"
  rsync "${OPTIONS[@]}" "$SRC2" "$DEST"
  rsync "${OPTIONS[@]}" "$SRC3" "$DEST"
  # Send a success metric to Pushgateway
  echo "backups_completed 1" | curl --silent --data-binary @- "http://pushgateway.sektorlab.xyz/metrics/job/pushgateway-exporter/node/arch-desktop"
else
  # Lock file missing or not "ok": treat the backup as failed
  # Send a failure metric to Pushgateway
  echo "backups_completed 0" | curl --silent --data-binary @- "http://pushgateway.sektorlab.xyz/metrics/job/pushgateway-exporter/node/arch-desktop"
fi
First I verify that my mount is actually mounted: on the NFS server I stored a file called status.lock with the content ok. The script reads that file first, and once it confirms the content, it backs up my directories and, once completed, sends a request to Pushgateway.
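The lock file only needs to exist on the share with the literal content ok, so a one-off step like this sets it up:
# Create the status file on the NFS share (one-off, while /mnt is mounted)
echo "ok" > /mnt/status.lock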
You can see that if the script fails to read the status file, we send a metric value of 0 to Pushgateway. The resulting metric will look like this in Prometheus:
{
__name__="backups_completed",
container="pushgateway",
endpoint="http",
job="pushgateway-exporter",
namespace="monitoring",
node="arch-desktop",
pod="pushgateway-5c58fc86ff-4g2ck",
service="pushgateway"
}
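You can also check what the Pushgateway itself is holding before Prometheus scrapes it; its /metrics endpoint exposes everything that has been pushed:
# Inspect the pushed metric directly on the Pushgateway
curl -s http://pushgateway.sektorlab.xyz/metrics | grep backups_completed
# roughly: backups_completed{job="pushgateway-exporter",node="arch-desktop"} 1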
Visualizing on Grafana
On Grafana we can then use the Prometheus datasource to query for our backups with something like:
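Given the metric name and labels above, a simple query like the following works as a starting point:
backups_completed{node="arch-desktop"}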
This gives a panel that shows 1 whenever the last backup succeeded and 0 when the status check failed.
Triggering the script
To trigger the script every 6 hours, we can use cron; open the current user's crontab to add a new entry:
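# Edit the current user's crontab
crontab -e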
Then add the entry:
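A schedule like the following runs the script at minute 0 every six hours (it assumes the script is saved as backup.sh in the scripts directory from earlier; adjust the absolute path to match):
# m h dom mon dow command
0 */6 * * * /home/youruser/scripts/backup.sh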
And ensure the script has executable permissions:
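# Make the backup script executable (assuming it is saved as ~/scripts/backup.sh)
chmod +x ~/scripts/backup.sh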
Now your backups should run every 6 hours.
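One thing worth double-checking on Arch: installing cronie does not enable the daemon by itself, so make sure the service is running or the schedule will never fire:
# Enable and start the cron daemon provided by cronie
sudo systemctl enable --now cronie.service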