March 28, 2026 · 9 min read

What is Dead Man's Switch Monitoring? A Developer's Guide

Your cron jobs don't crash. They just stop running. Nobody notices until the backups are three weeks old, the reports are stale, and the cache is ice cold. Dead man's switch monitoring fixes this by flipping the model: instead of checking if something is up, it checks if something happened. Here's how it works and how to set it up.

What is a Dead Man's Switch?

The concept comes from industrial safety. A dead man's switch is a mechanism that activates when the operator stops doing something. A train engineer holds a lever—if they let go (pass out, leave the cab), the train brakes automatically. The system assumes something is wrong unless it's continuously told otherwise.

In software, dead man's switch monitoring works the same way. Instead of a monitoring service actively checking whether your job is running, your job sends a heartbeat ping to the monitoring service every time it completes. If the ping stops arriving within the expected interval, the service triggers an alert.

This is fundamentally different from uptime monitoring. Tools like Pingdom or UptimeRobot check whether a server responds to HTTP requests. But your server can be perfectly healthy while a critical backup script hasn't run in two weeks. Dead man's switch monitoring catches that.

The key insight: Traditional monitoring asks "is this thing responding?" Dead man's switch monitoring asks "did this thing happen?" That's a different question entirely, and for scheduled tasks, it's the right one.

Why Cron Jobs Fail Silently

Cron jobs are uniquely prone to silent failure. A web server crashes and your load balancer notices immediately. A database goes down and your application throws errors. But a cron job? It just doesn't run. There's no crash. There's no error page. There's nothing.

Here are the most common ways cron jobs fail without anyone noticing:

- The server reboots and the cron daemon never comes back up.
- A deploy renames a script or changes a path, and the crontab entry silently points at nothing.
- An environment variable or credential the script depends on expires or disappears.
- The crontab itself gets wiped during a server migration or rebuild.
- The job starts but hangs indefinitely, never completing and never visibly erroring.
- The disk fills up and the script exits early, with its error output going to cron mail nobody reads.

In every one of these cases, your monitoring dashboard stays green. Your server is up. Your application is responding. But your data is stale, your backups are missing, and your reports are wrong.

Common Scenarios Where You Need DMS Monitoring

If any of these are running on a schedule in your infrastructure, they need a dead man's switch:

Backup scripts

Database dumps, file system snapshots, S3 syncs. The worst time to discover your backups haven't been running is when you need to restore from one. A dead man's switch API ping after each successful backup ensures you know within minutes if a backup cycle is missed.

Data sync jobs

ETL pipelines, warehouse loads, API data pulls. When your analytics dashboard shows yesterday's data because the sync job died three days ago, the damage is already done. Decisions were made on stale data.

Report generators

Daily revenue reports, weekly summaries, monthly invoices. These are often fire-and-forget scripts that nobody checks until someone asks "where's the report?" and the answer is "it hasn't run since February."

Cache warmers

Precomputed cache rebuilds that keep your application fast. When the cache warmer stops running, response times gradually degrade. It's a slow burn—not dramatic enough to trigger a latency alert, but enough to hurt user experience over days.

Log rotation and cleanup

Disk space management scripts that compress old logs, delete temp files, and archive old data. When these stop running, disk usage creeps up until something fills up and causes a real outage.

How to Set Up Dead Man's Switch Monitoring with CronPerek

CronPerek is a cron job monitoring API built around the dead man's switch model. The setup takes about two minutes. Here's the step-by-step process.

Step 1: Create a monitor

Use the CronPerek API to create a new monitor. Specify a name and the expected interval—how often your job should ping in.

curl -X POST https://cronpeek.web.app/api/v1/monitors \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "nightly-db-backup",
    "interval": 86400,
    "grace_period": 300
  }'

The interval is in seconds (86400 = 24 hours). The grace_period gives your job a 5-minute buffer before triggering an alert, accounting for normal execution time variance.
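The alerting decision reduces to simple arithmetic: a monitor is overdue once the current time passes last_ping + interval + grace_period. A minimal sketch of that check (the epoch timestamps below are illustrative placeholders, not values from a real monitor):

```shell
# Dead man's switch core logic: alert when the heartbeat is overdue.
# Epoch values are illustrative placeholders.
last_ping=1774576814   # when the last ping arrived
interval=86400         # expected ping interval (24 h)
grace=300              # grace period (5 min)
now=1774663515         # "current" time, 1 second past the deadline

deadline=$((last_ping + interval + grace))
if [ "$now" -gt "$deadline" ]; then
  echo "ALERT: monitor overdue"
else
  echo "healthy"
fi
```

Everything else the service does (storing pings, sending alerts) hangs off this one comparison.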

The response includes your unique monitor ID and ping URL:

{
  "id": "mon_a1b2c3d4e5f6",
  "name": "nightly-db-backup",
  "ping_url": "https://cronpeek.web.app/api/v1/ping/mon_a1b2c3d4e5f6",
  "status": "waiting",
  "interval": 86400,
  "grace_period": 300
}
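If you're scripting the setup, you can pull ping_url straight out of the create response. A sketch using a python3 one-liner to parse the JSON (the response body is pasted inline here for illustration; in practice you'd capture the output of the curl call):

```shell
# Extract ping_url from a create-monitor response.
# The JSON mirrors the example response above.
response='{"id":"mon_a1b2c3d4e5f6","ping_url":"https://cronpeek.web.app/api/v1/ping/mon_a1b2c3d4e5f6","status":"waiting"}'

ping_url=$(printf '%s' "$response" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["ping_url"])')
echo "$ping_url"
```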

Step 2: Add the ping to your crontab

Append a curl call to the end of your cron job. Use && so the ping only fires on success:

# Before (unmonitored):
0 2 * * * /home/deploy/scripts/backup-db.sh

# After (monitored with CronPerek):
0 2 * * * /home/deploy/scripts/backup-db.sh && curl -fsS --retry 3 --max-time 10 https://cronpeek.web.app/api/v1/ping/mon_a1b2c3d4e5f6

The flags matter:

- -f — exit with a non-zero code on HTTP errors instead of silently printing the error page.
- -s — silent mode; no progress bars cluttering your cron mail.
- -S — still print an error message when something does go wrong.
- --retry 3 — retry up to three times on transient network failures, so a brief blip doesn't cause a false alarm.
- --max-time 10 — give up after 10 seconds, so a hung ping can't block anything after it.

Step 3: Verify the first ping

Run your job manually or wait for the next scheduled execution. Check the monitor status:

curl -s https://cronpeek.web.app/api/v1/monitors/mon_a1b2c3d4e5f6 \
  -H "Authorization: Bearer YOUR_API_KEY" | python3 -m json.tool
{
  "id": "mon_a1b2c3d4e5f6",
  "name": "nightly-db-backup",
  "status": "healthy",
  "last_ping": "2026-03-28T02:00:14Z",
  "next_expected": "2026-03-29T02:05:14Z",
  "ping_count": 1
}

Once the status shows healthy, your dead man's switch is armed. If the next ping doesn't arrive by next_expected, CronPerek fires an alert.

Wrapping a multi-step script

For jobs with multiple steps, wrap everything in a script and ping only on full success:

#!/bin/bash
# backup-and-upload.sh
set -euo pipefail

# Step 1: Dump database
pg_dump mydb > /tmp/backup-$(date +%Y%m%d).sql

# Step 2: Compress
gzip /tmp/backup-$(date +%Y%m%d).sql

# Step 3: Upload to S3
aws s3 cp /tmp/backup-$(date +%Y%m%d).sql.gz s3://my-backups/

# Step 4: Cleanup
rm /tmp/backup-$(date +%Y%m%d).sql.gz

# All steps succeeded — ping CronPerek
curl -fsS --retry 3 --max-time 10 \
  https://cronpeek.web.app/api/v1/ping/mon_a1b2c3d4e5f6

The set -euo pipefail ensures the script exits immediately if any step fails, so the ping at the bottom only fires when everything succeeded.
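You can see that guard in action with a toy script: the failing middle step (a bare false standing in for a failed backup command) aborts execution before the final line, so a would-be ping never fires.

```shell
# Demo: under set -euo pipefail, a failed step stops the script
# before the trailing ping line is reached.
cat > /tmp/dms-demo.sh <<'EOF'
set -euo pipefail
echo "step 1 ok"
false            # simulates a failing backup step
echo "ping sent" # never reached
EOF

out=$(bash /tmp/dms-demo.sh || true)
echo "$out"
```

The captured output contains only "step 1 ok" — exactly the behavior you want from a ping that must mean "everything worked."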

Setting Up Alerts: Webhooks and Email

When a monitor misses its expected ping, CronPerek needs to tell you about it. You can configure both email and webhook notifications.

Email alerts

Add an email notification channel to receive alerts when any monitor goes down:

curl -X POST https://cronpeek.web.app/api/v1/alerts \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "email",
    "address": "ops@yourcompany.com",
    "monitors": ["mon_a1b2c3d4e5f6"]
  }'

Webhook alerts

For integrating with Slack, Discord, PagerDuty, or your own alerting system, use a webhook:

curl -X POST https://cronpeek.web.app/api/v1/alerts \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "webhook",
    "url": "https://hooks.slack.com/services/T00/B00/xxxx",
    "monitors": ["mon_a1b2c3d4e5f6"]
  }'

When a monitor triggers, CronPerek sends a POST request to your webhook URL with a JSON payload:

{
  "event": "monitor.down",
  "monitor": {
    "id": "mon_a1b2c3d4e5f6",
    "name": "nightly-db-backup",
    "status": "down",
    "last_ping": "2026-03-27T02:00:14Z",
    "expected_by": "2026-03-28T02:05:14Z"
  },
  "timestamp": "2026-03-28T02:05:14Z"
}

This means you can route cron job failure alerts to any system that accepts webhooks. Pipe it into Slack, trigger a PagerDuty incident, send an SMS through Twilio, or hit your own API—whatever fits your on-call workflow.
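On the receiving end, routing is mostly a matter of reading event and monitor.name out of the payload. A sketch of that parsing step (the payload is pasted inline; a real handler would read it from the POST body):

```shell
# Parse an alert payload and build a one-line summary to page on.
# The JSON mirrors the example payload above.
payload='{"event":"monitor.down","monitor":{"id":"mon_a1b2c3d4e5f6","name":"nightly-db-backup","status":"down"}}'

summary=$(printf '%s' "$payload" \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["event"] + ": " + d["monitor"]["name"])')
echo "$summary"
```

From there, forwarding the summary to Slack, SMS, or an incident tracker is one more curl call in whatever shape that system expects.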

Pricing: CronPerek vs the Competition

Dead man's switch monitoring is not a complex service. It stores timestamps and sends alerts. The pricing should reflect that.

Service             Free Tier     50 Monitors    Pricing Model
CronPerek           5 monitors    $9/mo flat     Flat rate per tier
Cronitor            1 monitor     ~$100/mo       ~$2/monitor/mo
Dead Man's Snitch   1 snitch      $199/mo        Tiered plans
Healthchecks.io     20 checks     $20/mo         Tiered plans

Cronitor's per-monitor pricing means you pay more as your infrastructure grows. At $2 per monitor per month, 50 cron jobs cost $100/mo. CronPerek charges a flat $9/mo for up to 50 monitors—same dead man's switch API, same heartbeat pings, same alerts. The difference is $91/mo, or over $1,000 per year.

For solo developers and small teams, that's the difference between monitoring every job and monitoring only the ones you think are "important enough." The jobs you skip are exactly the ones that bite you later.

Stop choosing which cron jobs deserve monitoring. At $9/mo for 50 monitors, you can cover every backup, every sync, every report, every cleanup script. Monitor everything. Sleep better.

Quick Start: Your First Monitor in 60 Seconds

Here's the complete setup, start to finish:

# 1. Create a monitor
curl -X POST https://cronpeek.web.app/api/v1/monitors \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "daily-backup", "interval": 86400, "grace_period": 300}'

# 2. Copy the ping URL from the response and add to your crontab:
#    0 2 * * * /path/to/backup.sh && curl -fsS --retry 3 https://cronpeek.web.app/api/v1/ping/YOUR_MONITOR_ID

# 3. Set up an alert
curl -X POST https://cronpeek.web.app/api/v1/alerts \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"type": "email", "address": "you@example.com"}'

# Done. If your backup doesn't ping tomorrow, you'll know.

Start monitoring your cron jobs today

Free tier includes 5 monitors. No credit card required. Set up dead man's switch monitoring in under 2 minutes.

Get started free →