How to Monitor Node.js Cron Jobs with node-cron and CronPeek API
You wrote a Node.js scheduled task with node-cron. It worked in development. You deployed it. Three weeks later, someone asks why the nightly report stopped generating. The process crashed silently after a memory spike and nobody noticed. This guide shows you how to add dead man's switch monitoring to every Node.js scheduling pattern—node-cron, Bull, BullMQ, Agenda, PM2, and Docker containers—so you never discover a broken job by accident again.
Why Cron Job Monitoring Matters in Node.js
Node.js scheduled tasks fail differently than traditional Unix cron jobs. A Unix cron daemon restarts independently of your application. But a node-cron job runs inside your Node.js process. If that process crashes, restarts, or runs out of memory, every scheduled task inside it dies with it. There is no separate scheduler keeping track.
Here are the failure modes that catch Node.js developers off guard:
- Silent process crashes. An unhandled promise rejection or a memory leak kills the process. If you are running without a process manager, the scheduler is simply gone. No error log. No alert. Nothing runs.
- Missed runs from schedule drift. Your job is scheduled for every 5 minutes, but the previous run takes 7 minutes. Depending on how your scheduler handles overlapping executions, you might skip runs or stack them up until the process chokes.
- Deployment gaps. During a deploy, the old process shuts down and the new one starts up. If your deploy takes 30 seconds and your job was supposed to run during that window, it is silently skipped. Rolling deploys make this worse—some instances have the old schedule, others have the new one.
- Environment differences. The job depends on an environment variable, a database connection string, or an API key that exists in staging but not in production. The scheduler starts, the job fires, and it fails with a cryptic error that gets swallowed by a catch block.
- Timezone bugs. You scheduled the job for 2 AM, but the server is in UTC and your users are in Eastern. The job runs at the wrong time. Or, after a daylight saving transition, it runs twice or not at all.
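The timezone pitfall in that last bullet is easy to reproduce with nothing but the built-in Date and Intl APIs, no scheduler required (the specific date below is arbitrary):

```javascript
// "0 2 * * *" on a UTC server fires at 02:00 UTC, not 2 AM for Eastern users.
const runAtUtc = new Date(Date.UTC(2025, 0, 15, 2, 0, 0)); // Jan 15, 02:00 UTC

const easternHour = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/New_York',
  hour: 'numeric',
  hour12: false
}).format(runAtUtc);

console.log(easternHour); // '21': 9 PM the previous evening in Eastern (EST is UTC-5)
```

If you use node-cron, passing a timezone option such as `{ timezone: 'America/New_York' }` as the third argument to `cron.schedule()` pins the schedule to an explicit zone instead of the server's.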
Traditional uptime monitoring does not catch any of these. Your server responds to health checks. Your API returns 200. But your scheduled tasks are dead and your data is going stale.
The core problem: Node.js cron jobs fail by not running. There is no crash page, no 500 error, no stack trace in your APM. A dead man's switch is the only pattern that detects the absence of an event—which is exactly how scheduled tasks fail.
How Dead Man's Switch Monitoring Works
The concept is simple. After your scheduled task completes, it sends an HTTP ping to a monitoring service. The monitoring service expects that ping at regular intervals. If the ping stops arriving, the service assumes the job is dead and sends you an alert.
With CronPeek, the flow is:
- Create a monitor via the API with a name and expected interval.
- Ping the monitor after each successful job execution.
- Get alerted if the ping does not arrive within the expected window plus a grace period.
The API base URL is https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi. You create monitors with POST /monitors and send heartbeats with GET /ping/:token, POST /ping/:token, or HEAD /ping/:token—any HTTP method works.
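Because any HTTP method works, client code only ever needs the ping URL itself. Here is a small sketch of a reusable URL builder for fetch, axios, or curl; the token is a placeholder, and the optional `status`/`msg` query parameters follow the failure-reporting pattern covered later in this guide:

```javascript
// Build a CronPeek ping URL (pure function, no network involved).
const CRONPEEK_API = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi';

function pingUrl(token, { status, msg } = {}) {
  const url = new URL(`${CRONPEEK_API}/ping/${token}`);
  if (status) url.searchParams.set('status', status);      // e.g. 'success' or 'failure'
  if (msg) url.searchParams.set('msg', msg.slice(0, 256)); // keep messages short
  return url.toString();
}

console.log(pingUrl('mon_x7k9m2p4q1'));
// https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_x7k9m2p4q1
```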
Setting Up node-cron with CronPeek
The node-cron library is the most popular cron scheduler for Node.js, with over 1.5 million weekly downloads. Here is how to add dead man's switch monitoring to a node-cron task.
Step 1: Create a monitor
First, create a monitor in CronPeek for your scheduled task. This gives you a unique ping token.
Create monitor via API
curl -X POST https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/monitors \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "nightly-report-generator",
"interval": 86400,
"grace_period": 600
}'
The response includes your ping token:
{
"id": "mon_x7k9m2p4q1",
"name": "nightly-report-generator",
"ping_url": "https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_x7k9m2p4q1",
"status": "waiting",
"interval": 86400,
"grace_period": 600
}
Step 2: Add the heartbeat ping to your job
After your task logic completes successfully, send an HTTP request to the ping URL. Use the built-in fetch() available in Node.js 18+ or install node-fetch for older versions.
import cron from 'node-cron';
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_x7k9m2p4q1';
// Run every day at 2:00 AM
cron.schedule('0 2 * * *', async () => {
try {
console.log('[report] Starting nightly report generation...');
// Your actual job logic
await generateNightlyReport();
await uploadToS3();
await notifyStakeholders();
// All steps succeeded — ping CronPeek
await fetch(CRONPEEK_PING, {
method: 'POST',
signal: AbortSignal.timeout(10000) // 10s timeout
});
console.log('[report] Completed and pinged CronPeek.');
} catch (err) {
console.error('[report] Failed:', err.message);
// Do NOT ping on failure — CronPeek will alert on missed heartbeat
}
});
The key detail: the ping is inside the try block, after all task steps. If any step throws, the catch block runs and the ping never fires. CronPeek then sees a missed heartbeat and alerts you.
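The inverse mistake is worth spelling out: putting the ping in a finally block (or before the work starts) defeats the dead man's switch entirely, because the heartbeat fires even when the job fails. A stubbed sketch, with no real network calls, of what not to do:

```javascript
// ANTI-PATTERN: a heartbeat in `finally` fires even when the job throws.
let pinged = false;
function sendHeartbeat() { pinged = true; }                // stub for the HTTP ping
function doWork() { throw new Error('job logic failed'); } // stub job that fails

function badJob() {
  try {
    doWork();
  } finally {
    sendHeartbeat(); // WRONG place: runs on failure too
  }
}

try { badJob(); } catch { /* job failed */ }
console.log(pinged); // true: the monitor still sees a healthy heartbeat
```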
Handling Job Failures—Ping with Failure Status
Waiting for a missed heartbeat works, but it introduces a delay equal to your interval plus grace period. For critical jobs, you want to know about failures immediately. CronPeek supports pinging with a failure status so you get an instant alert.
Immediate failure reporting
import cron from 'node-cron';
const CRONPEEK_BASE = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_x7k9m2p4q1';
async function pingCronPeek(status = 'success', message = '') {
try {
const url = new URL(CRONPEEK_BASE);
url.searchParams.set('status', status);
if (message) url.searchParams.set('msg', message.slice(0, 256));
await fetch(url.toString(), {
method: 'POST',
signal: AbortSignal.timeout(10000)
});
} catch (pingErr) {
console.error('[cronpeek] Ping failed:', pingErr.message);
}
}
cron.schedule('*/30 * * * *', async () => {
const start = Date.now();
try {
await syncInventoryData();
const duration = Date.now() - start;
// Success — heartbeat with timing info
await pingCronPeek('success', `completed in ${duration}ms`);
} catch (err) {
// Failure — immediate alert
await pingCronPeek('failure', err.message);
}
});
This gives you two layers of protection. If the job runs and fails, you get an immediate failure alert. If the entire process crashes and the job never runs at all, you get a missed heartbeat alert after the grace period.
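To make that concrete, here is the worst-case alert latency for each layer, sketched with hypothetical numbers for a job that runs every 30 minutes:

```javascript
// Worst-case detection latency for the two protection layers described above.
const intervalSec = 30 * 60; // job runs every 30 minutes
const graceSec = 5 * 60;     // 5-minute grace period on the monitor

// Layer 1: job runs but fails -> failure ping -> alert is near-immediate.
// Layer 2: process is dead, nothing pings -> alert after interval + grace.
const missedHeartbeatLatencySec = intervalSec + graceSec;
console.log(missedHeartbeatLatencySec); // 2100 seconds (35 minutes)
```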
Using axios instead of fetch
If your project already uses axios, the ping is straightforward:
Heartbeat with axios
import axios from 'axios';
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_x7k9m2p4q1';
async function heartbeat() {
await axios.post(CRONPEEK_PING, null, { timeout: 10000 });
}
// In your cron callback:
// await doWork();
// await heartbeat();
Monitoring Bull and BullMQ Queue Workers
BullMQ and its predecessor Bull are the standard for reliable job queues in Node.js. They use Redis as a backend and support repeatable (cron-like) jobs. The problem is the same: if the worker process dies, repeatable jobs stop processing and nobody knows.
BullMQ repeatable jobs
BullMQ with CronPeek monitoring
import { Queue, Worker } from 'bullmq';
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_q8w2e5r1';
const connection = { host: '127.0.0.1', port: 6379 };
// Create a queue with a repeatable job (every hour)
const queue = new Queue('reports', { connection });
await queue.add('hourly-digest', {}, {
repeat: { pattern: '0 * * * *' } // every hour
});
// Worker processes the job
const worker = new Worker('reports', async (job) => {
await buildHourlyDigest(job.data);
}, { connection });
// Monitor completions — ping CronPeek on success
worker.on('completed', async (job) => {
if (job.name === 'hourly-digest') {
try {
await fetch(CRONPEEK_PING, { method: 'POST', signal: AbortSignal.timeout(10000) });
console.log(`[bullmq] Job ${job.id} completed, pinged CronPeek`);
} catch (err) {
console.error('[bullmq] CronPeek ping failed:', err.message);
}
}
});
// Monitor failures — ping with failure status
worker.on('failed', async (job, err) => {
if (job?.name === 'hourly-digest') { // job can be undefined in BullMQ's failed event
try {
await fetch(`${CRONPEEK_PING}?status=failure&msg=${encodeURIComponent(err.message.slice(0, 256))}`, {
method: 'POST', signal: AbortSignal.timeout(10000)
});
} catch (pingErr) {
console.error('[bullmq] CronPeek failure ping failed:', pingErr.message);
}
}
});
Legacy Bull (v3/v4)
Bull uses a similar event API. The main difference is how repeatable jobs are defined:
Bull v4 with CronPeek
import Queue from 'bull';
const queue = new Queue('cleanup', 'redis://127.0.0.1:6379');
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_t3y6u8i2';
// Add repeatable job
queue.add({}, { repeat: { cron: '0 3 * * *' } });
// Process
queue.process(async (job) => {
await cleanupOldRecords();
});
// Ping on completion (swallow ping errors so an unhandled rejection can't crash the worker)
queue.on('completed', async () => {
await fetch(CRONPEEK_PING, { method: 'POST', signal: AbortSignal.timeout(10000) }).catch(() => {});
});
queue.on('failed', async (job, err) => {
await fetch(`${CRONPEEK_PING}?status=failure&msg=${encodeURIComponent(err.message.slice(0, 256))}`, {
method: 'POST', signal: AbortSignal.timeout(10000)
}).catch(() => {});
});
Bull/BullMQ tip: If your worker runs on a separate server from your API, the worker process crashing is the most common failure mode. The jobs stay in Redis waiting to be processed, but no worker picks them up. A CronPeek heartbeat on the worker catches this immediately.
Monitoring PM2 cron_restart Scheduled Processes
PM2 is a production process manager for Node.js. It has a cron_restart option that restarts your application on a cron schedule—useful for tasks that run once and exit. The problem: PM2 restarts the process, but has no idea whether the process actually did its job successfully.
ecosystem.config.js
module.exports = {
apps: [{
name: 'invoice-generator',
script: './jobs/generate-invoices.mjs',
cron_restart: '0 6 1 * *', // 1st of every month at 6 AM
autorestart: false, // don't restart on exit
watch: false
}]
};
jobs/generate-invoices.mjs
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_p5o3i7u9';
async function main() {
try {
console.log('[invoices] Generating monthly invoices...');
const count = await generateAllInvoices();
console.log(`[invoices] Generated ${count} invoices.`);
// Ping CronPeek on success
await fetch(CRONPEEK_PING, {
method: 'POST',
signal: AbortSignal.timeout(10000)
});
console.log('[invoices] Pinged CronPeek.');
} catch (err) {
console.error('[invoices] FAILED:', err);
// Ping with failure status for immediate alert
await fetch(`${CRONPEEK_PING}?status=failure&msg=${encodeURIComponent(err.message.slice(0, 256))}`, {
method: 'POST',
signal: AbortSignal.timeout(10000)
}).catch(() => {});
process.exit(1);
}
}
main().then(() => process.exit(0));
Set the CronPeek monitor interval to match your cron_restart schedule. For a monthly job, set interval to 2678400 (31 days) with a generous grace_period. If PM2 fails to restart the process—which happens after server reboots if PM2 was not saved with pm2 save—CronPeek catches it.
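That 2678400 figure is just 31 days, the longest possible month, expressed in seconds, which is the unit the interval field uses:

```javascript
// Interval arithmetic for a monthly monitor (values in seconds).
const SECONDS_PER_DAY = 24 * 60 * 60;         // 86400
const monthlyInterval = 31 * SECONDS_PER_DAY; // longest possible month
console.log(monthlyInterval); // 2678400
```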
Docker Container Cron Monitoring
Running Node.js cron jobs inside Docker containers adds another failure layer. The container can be killed by the orchestrator, OOM-killed by the kernel, or simply not restarted after a host migration. Here is how to monitor both the container health and the job execution.
Application-level heartbeat
Add CronPeek pings inside your Node.js code exactly as shown in the node-cron examples above. This monitors whether the job actually runs and succeeds.
Docker HEALTHCHECK
Add a HEALTHCHECK to your Dockerfile so Docker (and orchestrators like ECS, Kubernetes, or Swarm) knows if the container is alive:
Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Healthcheck: confirms the container can still spawn a Node process (a coarse liveness check; see the HTTP variant below)
HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
CMD node -e "process.exit(0)" || exit 1
CMD ["node", "scheduler.js"]
Combined pattern: container + application monitoring
For production, you want both layers. The Docker HEALTHCHECK catches container-level failures (OOM, crashes). The CronPeek heartbeat catches application-level failures (job logic errors, stuck processes, scheduling bugs).
scheduler.js (full Docker example)
import cron from 'node-cron';
import { createServer } from 'http';
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_d4c8k2f6';
let lastRunSuccess = false;
let lastRunTime = null;
// Health endpoint for Docker HEALTHCHECK
const server = createServer((req, res) => {
if (req.url === '/health') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
status: 'running',
lastRun: lastRunTime,
lastSuccess: lastRunSuccess
}));
} else {
res.writeHead(404);
res.end();
}
});
server.listen(8080);
// Scheduled job
cron.schedule('*/15 * * * *', async () => {
try {
await processQueuedEmails();
lastRunSuccess = true;
lastRunTime = new Date().toISOString();
await fetch(CRONPEEK_PING, {
method: 'POST',
signal: AbortSignal.timeout(10000)
});
} catch (err) {
lastRunSuccess = false;
lastRunTime = new Date().toISOString();
await fetch(`${CRONPEEK_PING}?status=failure&msg=${encodeURIComponent(err.message.slice(0, 256))}`, {
method: 'POST',
signal: AbortSignal.timeout(10000)
}).catch(() => {});
}
});
Update the Dockerfile HEALTHCHECK to use the HTTP endpoint:
HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
CMD wget -q --spider http://localhost:8080/health || exit 1
Docker Compose and Kubernetes: In Docker Compose, add healthcheck to your service definition. In Kubernetes, use a livenessProbe hitting /health and a separate CronPeek monitor for each scheduled task. The probes keep the pod alive; CronPeek verifies the jobs actually run.
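For reference, a Kubernetes livenessProbe matching that pattern might look like the following sketch; the container name, image, and probe timings are placeholders, while the port and /health path follow the scheduler.js example above:

```yaml
# Pod spec fragment (sketch): probe the scheduler's /health endpoint.
containers:
  - name: scheduler
    image: your-registry/scheduler:latest   # placeholder image
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 60
      failureThreshold: 3
```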
Monitoring Agenda.js Scheduled Jobs
Agenda uses MongoDB as its job store, which means jobs persist across process restarts. But persistence is not monitoring. If the Agenda worker stops processing jobs (connection pool exhausted, MongoDB failover, worker crash), the jobs pile up in the database with no alert.
Agenda with CronPeek heartbeat
import { Agenda } from 'agenda';
const agenda = new Agenda({ db: { address: process.env.MONGO_URI } });
const CRONPEEK_PING = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_a2g5e8n1';
agenda.define('daily-cleanup', async (job) => {
await cleanupExpiredSessions();
await purgeOldLogs();
// Ping on success (ignore ping failures so they don't mark the job itself as failed)
await fetch(CRONPEEK_PING, {
method: 'POST',
signal: AbortSignal.timeout(10000)
}).catch(() => {});
});
await agenda.start();
await agenda.every('24 hours', 'daily-cleanup');
Pricing: CronPeek vs Cronitor and Others
If you are running a typical Node.js application, you probably have between 5 and 50 scheduled tasks. Database cleanups, cache warmers, report generators, email queues, webhook retries, data syncs. Here is what monitoring all of them costs:
| Service | Free Tier | 50 Monitors | Unlimited |
|---|---|---|---|
| CronPeek | 5 monitors | $9/mo | $29/mo |
| Cronitor | 1 monitor | ~$100/mo | Custom |
| Dead Man's Snitch | 1 snitch | $199/mo | Custom |
| Healthchecks.io | 20 checks | $20/mo | $80/mo |
| Better Uptime | 5 heartbeats | $85/mo | Custom |
Cronitor charges roughly $2 per monitor per month. At that rate, 50 cron jobs cost $100/mo, or $1,200/year. CronPeek's Starter plan covers the same 50 monitors for $9/mo, or $108/year. That is a 91% cost reduction for identical functionality: heartbeat pings, missed-run alerts, webhook notifications.
For teams that need more, the Pro plan at $29/mo gives you unlimited monitors. No per-monitor pricing, no surprise bills as your infrastructure grows. Add a monitor for every Bull queue, every node-cron task, every PM2 process, every Docker container. Monitor everything.
The math is simple. Cronitor for 50 monitors: $1,200/year. CronPeek for 50 monitors: $108/year. Same API, same alerts, same reliability. Spend the $1,092 difference on actual infrastructure.
A Reusable CronPeek Helper Module
If you have multiple scheduled tasks in your application, extract the monitoring logic into a shared module:
lib/cronpeek.mjs
const CRONPEEK_API = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi';
/**
* Wraps a cron job function with CronPeek dead man's switch monitoring.
* @param {string} token - The CronPeek monitor ping token
* @param {Function} fn - The async job function to execute
* @returns {Function} - Wrapped function that pings CronPeek on completion
*/
export function withMonitor(token, fn) {
return async (...args) => {
const start = Date.now();
try {
const result = await fn(...args);
const duration = Date.now() - start;
await fetch(`${CRONPEEK_API}/ping/${token}?status=success&msg=${encodeURIComponent(`OK in ${duration}ms`)}`, {
method: 'POST',
signal: AbortSignal.timeout(10000)
}).catch(() => {});
return result;
} catch (err) {
await fetch(`${CRONPEEK_API}/ping/${token}?status=failure&msg=${encodeURIComponent(err.message.slice(0, 256))}`, {
method: 'POST',
signal: AbortSignal.timeout(10000)
}).catch(() => {});
throw err; // re-throw so your error handling still works
}
};
}
Now adding monitoring to any job is a one-liner:
Using the helper
import cron from 'node-cron';
import { withMonitor } from './lib/cronpeek.mjs';
cron.schedule('0 * * * *', withMonitor('mon_x7k9m2p4q1', async () => {
await syncUserData();
}));
cron.schedule('0 2 * * *', withMonitor('mon_q8w2e5r1', async () => {
await generateDailyReport();
}));
cron.schedule('*/5 * * * *', withMonitor('mon_t3y6u8i2', async () => {
await processWebhookQueue();
}));
Three jobs, three monitors, zero boilerplate in each job function.
Quick Start: Your First Node.js Monitor
Here is the complete setup in under 2 minutes:
Terminal
# 1. Create a monitor
curl -X POST https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/monitors \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"name": "my-node-job", "interval": 3600, "grace_period": 300}'
# Response includes: "ping_url": ".../ping/mon_YOUR_TOKEN"
# 2. Add the ping to your Node.js code:
# await fetch('https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_YOUR_TOKEN', { method: 'POST' })
# 3. Set up an alert
curl -X POST https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/alerts \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"type": "email", "address": "you@example.com"}'
# Done. If your job stops running, you'll know within minutes.
Start monitoring your Node.js cron jobs
Free tier includes 5 monitors. No credit card required. Works with node-cron, Bull, BullMQ, Agenda, PM2, and any Node.js scheduler.
Get started free →