How to Monitor Any Scheduled Task via API
Cron is just one scheduler. Modern infrastructure runs scheduled work through GitHub Actions, Airflow, systemd timers, Kubernetes CronJobs, AWS EventBridge, GCP Cloud Scheduler, and Windows Task Scheduler. Here's how to monitor all of them with a single API endpoint.
The Fragmentation Problem
Ten years ago, most scheduled tasks lived in a single crontab on a single server. Monitoring meant checking one place. Today, a typical team runs scheduled work across five or six different systems:
- A crontab on the database server for backups
- GitHub Actions for nightly CI builds and dependency checks
- Airflow for data pipelines
- AWS EventBridge for serverless event processing
- systemd timers for on-host maintenance scripts
- Kubernetes CronJobs for containerized batch work
Each of these schedulers has its own logging, its own failure modes, and its own (often inadequate) notification system. GitHub Actions sends an email if a workflow fails—but not if you accidentally disable it. EventBridge logs to CloudWatch—but only if you set up the log group. Airflow has built-in alerting—but configuring it correctly requires understanding its executor model.
The result is that monitoring is scattered across half a dozen dashboards, and the failure mode you actually care about, a task that has silently stopped running, is the one most likely to slip through.
One Endpoint, Every Scheduler
A dead man's switch API like CronPerek solves this by providing a single, scheduler-agnostic monitoring layer. The contract is simple: your task makes an HTTP GET request to a unique URL after each successful run. If the request doesn't arrive within the expected interval, you get alerted.
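The idea behind the contract can be sketched in a few lines. Below is a minimal, hypothetical model of the server side, written for illustration only; the class and field names are invented and this is not CronPerek's actual implementation:

```python
import time

class Monitor:
    """Minimal dead man's switch model: a monitor is overdue when no ping
    has arrived within its expected interval plus a grace period."""

    def __init__(self, interval_s, grace_s=0.0):
        self.interval_s = interval_s
        self.grace_s = grace_s
        self.last_ping = time.time()

    def ping(self):
        # Called whenever the task's HTTP GET arrives
        self.last_ping = time.time()

    def is_overdue(self, now=None):
        # True once the silence has lasted longer than interval + grace
        now = time.time() if now is None else now
        return now - self.last_ping > self.interval_s + self.grace_s
```

An alerting loop would periodically evaluate is_overdue() for every monitor and notify on the first True, which is exactly the "missing ping" condition described above.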
This works with any scheduler because the only requirement is the ability to make an HTTP call—and every platform listed above can do that. Here's how to integrate with each one.
Linux Crontab
The classic. Append a curl call to the end of your crontab entry:
# Nightly backup with monitoring
0 2 * * * /home/deploy/scripts/backup-db.sh && curl -fsS --retry 3 https://cronpeek.web.app/api/v1/ping/MON_backup
GitHub Actions (Scheduled Workflows)
Add a final step to your scheduled workflow that pings CronPerek:
name: Nightly Tests
on:
  schedule:
    - cron: '0 3 * * *'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
      - name: Ping CronPerek
        if: success()
        run: curl -fsS --retry 3 https://cronpeek.web.app/api/v1/ping/MON_nightly_tests
The if: success() condition makes the intent explicit: the ping only fires when every previous step passes (this is also the default behavior for a step without an if). If a test fails, no ping is sent, and CronPerek alerts you. More importantly, you are also covered when the workflow never runs at all: GitHub automatically disables scheduled workflows in public repositories after 60 days of inactivity, and a GitHub Actions outage can skip runs entirely. In both cases the missing ping triggers an alert.
Apache Airflow DAGs
Use a SimpleHttpOperator or a PythonOperator as the final task in your DAG:
from airflow.providers.http.operators.http import SimpleHttpOperator

ping_monitor = SimpleHttpOperator(
    task_id='ping_cronperek',
    http_conn_id='cronperek_api',
    endpoint='/api/v1/ping/MON_etl_pipeline',
    method='GET',
    dag=dag,
)

# Set as downstream of your final task
transform_data >> load_to_warehouse >> ping_monitor
Or with a simple Python callable:
import requests
from airflow.operators.python import PythonOperator

def ping_cronperek():
    response = requests.get(
        "https://cronpeek.web.app/api/v1/ping/MON_etl_pipeline",
        timeout=10,
    )
    response.raise_for_status()  # fail the task if the ping itself fails

ping_task = PythonOperator(
    task_id='ping_cronperek',
    python_callable=ping_cronperek,
    dag=dag,
)
systemd Timers
systemd timers are the modern replacement for cron on Linux. Add the ping to your service unit's ExecStartPost directive, or append it to your script:
# /etc/systemd/system/cleanup.service
[Unit]
Description=Log cleanup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cleanup-logs.sh
ExecStartPost=/usr/bin/curl -fsS --retry 3 https://cronpeek.web.app/api/v1/ping/MON_log_cleanup
ExecStartPost only runs if ExecStart succeeds, so the behavior matches the crontab && pattern.
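For completeness, the timer unit that triggers this service might look like the following. This is a sketch: the unit name and OnCalendar schedule are illustrative, not prescribed.

```ini
# /etc/systemd/system/cleanup.timer
[Unit]
Description=Run log cleanup nightly

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now cleanup.timer. The monitoring ping lives in the service unit, so the timer itself needs no changes.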
AWS EventBridge + Lambda
For serverless scheduled tasks, add the ping at the end of your Lambda function:
import urllib.request

def handler(event, context):
    # Your actual scheduled work
    process_daily_reports()

    # Ping CronPerek on success
    urllib.request.urlopen(
        "https://cronpeek.web.app/api/v1/ping/MON_daily_reports",
        timeout=10,
    )
    return {"statusCode": 200}
This catches Lambda failures, EventBridge misconfigurations, and IAM permission issues that prevent the function from being invoked at all.
GCP Cloud Scheduler
Google Cloud Scheduler can call HTTP endpoints directly. Create a second Cloud Scheduler job that pings CronPerek, or add the ping to the end of your Cloud Function / Cloud Run job:
const https = require('https');

exports.nightlySync = async (req, res) => {
  await syncDataToWarehouse();

  // Ping on success
  await new Promise((resolve, reject) => {
    https.get('https://cronpeek.web.app/api/v1/ping/MON_nightly_sync', (r) => {
      r.resume(); // drain the response so the socket is released
      resolve();
    }).on('error', reject);
  });

  res.status(200).send('OK');
};
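The second-job approach mentioned above can be set up from the CLI. A sketch, with an illustrative job name and schedule (depending on your project configuration, gcloud may also prompt for a --location):

```shell
gcloud scheduler jobs create http ping-cronperek \
  --schedule="15 3 * * *" \
  --uri="https://cronpeek.web.app/api/v1/ping/MON_nightly_sync" \
  --http-method=GET
```

Note the trade-off: an unconditional second job only proves that Cloud Scheduler itself is firing, not that your task succeeded. The in-function ping shown above is the stricter signal.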
Windows Task Scheduler
Add a curl or Invoke-WebRequest call to the end of your PowerShell script:
# backup-database.ps1
$ErrorActionPreference = "Stop"
try {
    & "C:\Scripts\backup-sqlserver.ps1"
    if ($LASTEXITCODE -ne 0) { throw "Backup script exited with code $LASTEXITCODE" }

    # Ping CronPerek on success
    Invoke-WebRequest -Uri "https://cronpeek.web.app/api/v1/ping/MON_sql_backup" `
        -TimeoutSec 10 -UseBasicParsing | Out-Null
}
catch {
    Write-Error $_.Exception.Message
    exit 1
}
Windows users often overlook monitoring because Task Scheduler's built-in email and message actions have been deprecated since Windows 8. A dead man's switch API fills that gap.
Why Platform-Native Alerting Isn't Enough
Every scheduler listed above has some form of built-in notification. So why use an external monitoring API?
| Platform | Built-in Alerting | Catches "Didn't Run"? |
|---|---|---|
| Linux crontab | MAILTO (local mail) | No |
| GitHub Actions | Email on failure | No (disabled workflow = no alert) |
| Airflow | Email, Slack callbacks | Partial (scheduler crash = no alert) |
| AWS EventBridge | CloudWatch Alarms | Requires custom metric setup |
| GCP Cloud Scheduler | Cloud Logging (formerly Stackdriver) | Requires log-based alerting |
| systemd timers | journalctl | No |
| Windows Task Scheduler | Event Log | No |
| Dead man's switch API | Email, webhook | Yes — always |
The universal gap: Platform-native alerting tells you when a task fails. A dead man's switch tells you when a task doesn't run. These are different failure modes, and the second one is harder to detect and usually more damaging.
Centralized Visibility Across Schedulers
Beyond catching silent failures, routing all your scheduled tasks through a single monitoring API gives you one place to answer the question: "Is everything running?"
Instead of checking GitHub Actions, then Airflow, then CloudWatch, then your server's crontab, you look at one dashboard or one API response. Every task across every platform reports to the same place, with the same alerting rules, the same notification channels, and the same grace period logic.
CronPerek monitors are just labeled ping endpoints. They don't know or care what scheduler is behind them. A monitor named prod-db-backup might be a crontab today and a Kubernetes CronJob tomorrow—no monitoring configuration changes required. You migrate the scheduler, update where the ping comes from, and everything else stays the same.
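In practice, a tiny shared helper keeps every Python-based job, whatever its scheduler, reporting the same way. A sketch, with invented helper names; only the ping URL pattern comes from the examples above:

```python
import urllib.request

PING_BASE = "https://cronpeek.web.app/api/v1/ping"

def monitor_url(monitor_id):
    """Build the ping URL for a monitor; the ID is all a job needs to know."""
    return f"{PING_BASE}/{monitor_id}"

def ping(monitor_id, timeout=10):
    """Report a successful run. Returns True on HTTP 2xx, False otherwise,
    so a failed ping never crashes the job that just succeeded."""
    try:
        with urllib.request.urlopen(monitor_url(monitor_id), timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False
```

Because the helper takes only a monitor ID, migrating a job from crontab to Kubernetes means moving one call, exactly the scheduler-agnosticism described above.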
Getting Started
CronPerek's free tier includes 5 monitors—enough to cover the most critical scheduled tasks across your infrastructure. The Pro plan ($9/mo for 50 monitors) typically covers an entire small-to-medium deployment. For teams running hundreds of scheduled tasks, the Business plan ($29/mo) removes all limits.
Setup takes about 2 minutes per task: create a monitor, copy the ping URL, add a single HTTP call to the end of your job. No agents, no SDKs, no platform-specific integrations to maintain.
One API for every scheduler
Monitor crontabs, GitHub Actions, Airflow, Lambda, systemd timers, and more from a single dashboard. Free tier includes 5 monitors.
Get started free →