Monitor Cron Jobs and Scheduled Tasks in Rust with CronPeek

Published March 29, 2026 · 12 min read · By CronPeek

Rust gives you memory safety, zero-cost abstractions, and no garbage collector pauses. What it does not give you is immunity from silent cron failures. A scheduled task that panics inside a tokio::spawn produces a single log line that nobody reads at 3 AM. A background job blocked on a poisoned mutex holds the thread forever while your HTTP server keeps passing health checks. The process is alive. The work is not happening.

Dead man's switch monitoring catches exactly this failure mode. Your Rust scheduled task pings an external endpoint after every successful run. If the ping stops arriving, you get an alert. This guide walks through integrating CronPeek heartbeat monitoring with reqwest, tokio, tokio-cron-scheduler, Actix-web, and Axum — the most common Rust stack combinations for services that run scheduled work.

Why Rust Services Need Cron Monitoring

Rust eliminates entire classes of bugs at compile time. But scheduled tasks fail for reasons the borrow checker cannot prevent:

- A task that panics inside tokio::spawn and ends silently
- A job blocked forever on a poisoned mutex or a deadlock
- Work that stops happening while the process stays alive and keeps passing health checks

None of these trigger traditional uptime monitors. You need a system that detects when something stops happening.

Quick Start: reqwest Ping to CronPeek

The simplest integration is a single HTTP GET after your task completes. Add reqwest and tokio to your Cargo.toml:

[dependencies]
reqwest = { version = "0.12", features = ["json"] }
tokio = { version = "1", features = ["full"] }

Create a reusable ping function that reports success or failure to CronPeek:

use reqwest::Client;
use std::time::Duration;

const CRONPEEK_BASE: &str = "https://cronpeek.web.app/api/v1/ping";

/// Ping CronPeek after a job run.
/// Pass `None` for success, or `Some(error_msg)` for failure.
async fn cronpeek_ping(
    client: &Client,
    monitor_id: &str,
    error: Option<&str>,
) -> Result<(), reqwest::Error> {
    let url = match error {
        Some(_) => format!("{}/{}/fail", CRONPEEK_BASE, monitor_id),
        None => format!("{}/{}", CRONPEEK_BASE, monitor_id),
    };

    client
        .get(&url)
        .timeout(Duration::from_secs(5))
        .send()
        .await?;

    Ok(())
}

Usage is straightforward. After your task logic runs, call the ping:

#[tokio::main]
async fn main() {
    let client = Client::new();

    // Run your scheduled work
    let result = run_billing_reconciliation().await;

    // Report to CronPeek
    let error = result.err().map(|e| e.to_string());
    if let Err(ping_err) = cronpeek_ping(
        &client,
        "mon_billing_rust_001",
        error.as_deref(),
    ).await {
        eprintln!("CronPeek ping failed: {}", ping_err);
    }
}

async fn run_billing_reconciliation() -> Result<(), Box<dyn std::error::Error>> {
    // ... your job logic
    Ok(())
}

Key details: the 5-second timeout prevents a CronPeek outage from blocking your service. Appending /fail to the URL triggers an immediate alert rather than waiting for a missed heartbeat window.
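The success-versus-failure URL logic can also be factored into a small pure helper, which makes it easy to unit test without any network access. A minimal sketch (`ping_url` is an illustrative helper, not part of the CronPeek API):

```rust
const CRONPEEK_BASE: &str = "https://cronpeek.web.app/api/v1/ping";

/// Build the ping URL for a run outcome: the bare monitor URL on success,
/// with "/fail" appended when an error occurred.
fn ping_url(monitor_id: &str, error: Option<&str>) -> String {
    match error {
        Some(_) => format!("{}/{}/fail", CRONPEEK_BASE, monitor_id),
        None => format!("{}/{}", CRONPEEK_BASE, monitor_id),
    }
}
```

Keeping URL construction out of the async code means a plain `assert_eq!` test covers it, with no mock server required.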

Using tokio-cron-scheduler with CronPeek Heartbeats

tokio-cron-scheduler is one of the most widely used cron libraries in the Rust ecosystem. It runs on the tokio runtime and supports six-field cron expressions with a leading seconds field. Here is how to wire up CronPeek monitoring for each job:

use tokio_cron_scheduler::{Job, JobScheduler};
use reqwest::Client;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let sched = JobScheduler::new().await?;
    let client = Arc::new(Client::new());

    // Run every 5 minutes
    let c = client.clone();
    sched.add(Job::new_async("0 */5 * * * *", move |_uuid, _lock| {
        let c = c.clone();
        Box::pin(async move {
            println!("Starting inventory sync...");

            let result = sync_inventory().await;

            // Report to CronPeek
            let error = result.err().map(|e| e.to_string());
            if let Err(e) = cronpeek_ping(
                &c,
                "mon_inventory_rust_001",
                error.as_deref(),
            ).await {
                eprintln!("CronPeek ping failed: {}", e);
            }
        })
    })?).await?;

    // Run daily at 2 AM
    let c = client.clone();
    sched.add(Job::new_async("0 0 2 * * *", move |_uuid, _lock| {
        let c = c.clone();
        Box::pin(async move {
            let result = generate_daily_reports().await;
            let error = result.err().map(|e| e.to_string());
            let _ = cronpeek_ping(
                &c,
                "mon_reports_rust_001",
                error.as_deref(),
            ).await;
        })
    })?).await?;

    sched.start().await?;

    // Block forever
    tokio::signal::ctrl_c().await?;
    Ok(())
}

async fn sync_inventory() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // ... your job logic
    Ok(())
}

async fn generate_daily_reports() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // ... your job logic
    Ok(())
}

Each job gets its own CronPeek monitor ID. This is important — when the inventory sync fails but the report generation keeps running, you need to know exactly which job stopped.
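As the number of jobs grows, it can help to keep the job-to-monitor mapping in one place instead of scattering string literals through closures. A minimal sketch (`monitor_id` is an illustrative helper; the IDs match the examples above):

```rust
/// Central job-name -> CronPeek monitor ID mapping, so each job reports to
/// its own monitor and an alert pinpoints exactly which job stopped.
fn monitor_id(job: &str) -> Option<&'static str> {
    match job {
        "inventory-sync" => Some("mon_inventory_rust_001"),
        "daily-reports" => Some("mon_reports_rust_001"),
        _ => None,
    }
}
```

A `None` result for an unknown job name is a useful guard: it turns a typo in a job name into an explicit error path rather than a ping to a nonexistent monitor.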

Monitoring Actix-web Background Tasks

Actix-web services commonly run background tasks alongside the HTTP server using tokio::spawn or actix_rt::spawn. The HTTP server passes health checks while the background task is silently dead. CronPeek catches this.

use actix_web::{web, App, HttpServer, HttpResponse};
use reqwest::Client;
use std::sync::Arc;
use tokio::time::{interval, Duration};

struct AppState {
    http_client: Client,
}

async fn health_check() -> HttpResponse {
    HttpResponse::Ok().body("ok")
}

/// Background task that runs every 10 minutes
async fn background_cleanup(state: Arc<AppState>) {
    let mut ticker = interval(Duration::from_secs(600));

    loop {
        ticker.tick().await;

        let result = cleanup_expired_sessions().await;

        let error = result.err().map(|e| e.to_string());
        if let Err(e) = cronpeek_ping(
            &state.http_client,
            "mon_cleanup_actix_001",
            error.as_deref(),
        ).await {
            eprintln!("CronPeek ping failed: {}", e);
        }
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let state = Arc::new(AppState {
        http_client: Client::new(),
    });

    // Spawn the background task
    let bg_state = state.clone();
    tokio::spawn(async move {
        background_cleanup(bg_state).await;
    });

    // Start the HTTP server
    HttpServer::new(move || {
        App::new()
            .route("/healthz", web::get().to(health_check))
    })
    .bind("0.0.0.0:8080")?
    .run()
    .await
}

async fn cleanup_expired_sessions() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // ... cleanup logic
    Ok(())
}

The critical point: if background_cleanup panics, the runtime catches the panic and the spawned task simply ends — nothing surfaces unless you await the JoinHandle. The HTTP server at /healthz continues responding 200. Without CronPeek, you would not know the cleanup stopped until stale sessions pile up and users complain.
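You can see the same dynamic in miniature with plain std threads, with no async runtime involved: a panic in a spawned thread is invisible to the rest of the program unless you join the handle and inspect the result. A dependency-free sketch:

```rust
use std::thread;

/// Spawn a "background job" that panics, then report whether the panic
/// was observable. Nothing crashes the parent thread; the panic only
/// surfaces because join() returns Err when the spawned thread panicked.
fn spawn_and_check() -> bool {
    let handle = thread::spawn(|| {
        panic!("job blew up");
    });

    handle.join().is_err()
}
```

In a fire-and-forget design nobody calls the equivalent of `join()`, which is exactly why an external dead man's switch is needed to notice the work stopped.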

Axum Integration with a Reusable Task Wrapper

For Axum services, you can write a generic wrapper function that surrounds any background task with CronPeek heartbeats. This is useful when you have a shared task executor pattern:

use axum::{Router, routing::get, extract::State};
use reqwest::Client;
use std::sync::Arc;
use tokio::time::{interval, Duration};

#[derive(Clone)]
struct AppState {
    cronpeek_client: Arc<Client>,
}

/// Generic task runner that reports to CronPeek after each execution
async fn monitored_task<F, Fut>(
    client: &Client,
    monitor_id: &str,
    task_name: &str,
    task: F,
) where
    F: FnOnce() -> Fut,
    Fut: std::future::Future<Output = Result<(), Box<dyn std::error::Error + Send + Sync>>>,
{
    println!("[{}] Starting...", task_name);
    let result = task().await;

    match &result {
        Ok(_) => println!("[{}] Completed successfully", task_name),
        Err(e) => eprintln!("[{}] Failed: {}", task_name, e),
    }

    let error = result.err().map(|e| e.to_string());
    if let Err(e) = cronpeek_ping(client, monitor_id, error.as_deref()).await {
        eprintln!("[{}] CronPeek ping failed: {}", task_name, e);
    }
}

async fn start_background_tasks(state: AppState) {
    let client = state.cronpeek_client.clone();

    // Task 1: Cache warming every 15 minutes
    let c = client.clone();
    tokio::spawn(async move {
        let mut ticker = interval(Duration::from_secs(900));
        loop {
            ticker.tick().await;
            monitored_task(
                &c,
                "mon_cache_axum_001",
                "cache-warm",
                || warm_cache(),
            ).await;
        }
    });

    // Task 2: Metrics aggregation every hour
    let c = client.clone();
    tokio::spawn(async move {
        let mut ticker = interval(Duration::from_secs(3600));
        loop {
            ticker.tick().await;
            monitored_task(
                &c,
                "mon_metrics_axum_001",
                "metrics-agg",
                || aggregate_metrics(),
            ).await;
        }
    });
}

async fn health() -> &'static str { "ok" }

#[tokio::main]
async fn main() {
    let state = AppState {
        cronpeek_client: Arc::new(Client::new()),
    };

    // Start background tasks before the server
    start_background_tasks(state.clone()).await;

    let app = Router::new()
        .route("/healthz", get(health))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn warm_cache() -> Result<(), Box<dyn std::error::Error + Send + Sync>> { Ok(()) }
async fn aggregate_metrics() -> Result<(), Box<dyn std::error::Error + Send + Sync>> { Ok(()) }

The monitored_task wrapper standardizes logging and CronPeek reporting across all your background tasks. Add a new task by calling the same wrapper with a different monitor ID and closure.

Error Handling: Fire-and-Forget Pattern

Your CronPeek ping should never block or crash your service. The correct pattern is fire-and-forget: spawn the ping as a separate task with a timeout, log failures, and move on. CronPeek's grace period tolerates occasional missed pings from transient network issues.

use tokio::time::timeout;
use std::time::Duration;

/// Fire-and-forget CronPeek ping. Never blocks the caller.
fn ping_cronpeek_background(
    client: Client,
    monitor_id: String,
    error: Option<String>,
) {
    tokio::spawn(async move {
        let result = timeout(
            Duration::from_secs(3),
            cronpeek_ping(&client, &monitor_id, error.as_deref()),
        ).await;

        match result {
            Ok(Ok(_)) => {} // success, nothing to log
            Ok(Err(e)) => eprintln!("CronPeek request error: {}", e),
            Err(_) => eprintln!("CronPeek ping timed out after 3s"),
        }
    });
}

Rules for production usage:

- Never block the job itself on the ping; spawn it as a separate task.
- Use a short timeout (3–5 seconds) so a CronPeek outage cannot stall your service.
- Log ping failures, but do not retry; the grace period absorbs transient misses.
- Report failures through the /fail endpoint so alerts fire immediately.

Docker/Kubernetes: Sidecar Pattern for Rust Services

If you cannot modify the Rust binary (third-party tool, compiled without CronPeek support), use a sidecar container that pings CronPeek after the main container exits:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: rust-etl-pipeline
spec:
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: etl-job
            image: your-registry/rust-etl:latest
            command:
            - /bin/sh
            - -c
            - |
              /app/etl-pipeline && \
              curl -sf https://cronpeek.web.app/api/v1/ping/mon_etl_rust_001 || \
              curl -sf https://cronpeek.web.app/api/v1/ping/mon_etl_rust_001/fail

For long-running Rust services with internal schedulers (not Kubernetes CronJobs), the sidecar approach uses a shared volume to communicate status:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rust-worker
spec:
  template:
    spec:
      containers:
      - name: worker
        image: your-registry/rust-worker:latest
        volumeMounts:
        - name: heartbeat
          mountPath: /tmp/heartbeat
      - name: cronpeek-sidecar
        image: curlimages/curl:latest
        command:
        - /bin/sh
        - -c
        - |
          while true; do
            # Check if the Rust process wrote a heartbeat file
            if [ -f /tmp/heartbeat/last_run ]; then
              AGE=$(( $(date +%s) - $(stat -c %Y /tmp/heartbeat/last_run) ))
              if [ "$AGE" -lt 600 ]; then
                curl -sf https://cronpeek.web.app/api/v1/ping/mon_worker_001
              fi
            fi
            sleep 60
          done
        volumeMounts:
        - name: heartbeat
          mountPath: /tmp/heartbeat
      volumes:
      - name: heartbeat
        emptyDir: {}

The Rust service touches /tmp/heartbeat/last_run after each successful task run. The sidecar checks the file age and pings CronPeek if the file is fresh. This decouples monitoring from application code entirely.
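On the Rust side, "touching" the heartbeat file is a few lines of std: writing the file updates its modification time, which is exactly what the sidecar's stat check reads. A sketch (the path matches the volume mount above; `touch_heartbeat` is an illustrative helper):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Write the heartbeat file so its mtime reflects the last successful run.
/// Call this only after the task completed without error, so a stalled or
/// failing task lets the file go stale and the sidecar stops pinging.
fn touch_heartbeat(path: &Path) -> io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }
    fs::write(path, b"ok")
}
```

In the deployment above, each task iteration would end with something like `touch_heartbeat(Path::new("/tmp/heartbeat/last_run"))` on the success path only.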

Testing: Mock CronPeek Endpoint in Integration Tests

In tests, you do not want to hit the real CronPeek API. Use wiremock to stand up a mock server that captures pings:

[dev-dependencies]
wiremock = "0.6"
tokio = { version = "1", features = ["full", "test-util"] }

use wiremock::{MockServer, Mock, matchers, ResponseTemplate};

#[tokio::test]
async fn test_cronpeek_ping_on_success() {
    // Start a mock CronPeek server
    let mock_server = MockServer::start().await;

    Mock::given(matchers::method("GET"))
        .and(matchers::path("/api/v1/ping/mon_test_001"))
        .respond_with(ResponseTemplate::new(200))
        .expect(1)
        .mount(&mock_server)
        .await;

    // Override the base URL to point at the mock
    let client = reqwest::Client::new();
    let url = format!("{}/api/v1/ping/mon_test_001", mock_server.uri());

    let resp = client
        .get(&url)
        .timeout(std::time::Duration::from_secs(5))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    // wiremock automatically verifies expect(1) on drop
}

#[tokio::test]
async fn test_cronpeek_ping_on_failure() {
    let mock_server = MockServer::start().await;

    Mock::given(matchers::method("GET"))
        .and(matchers::path("/api/v1/ping/mon_test_001/fail"))
        .respond_with(ResponseTemplate::new(200))
        .expect(1)
        .mount(&mock_server)
        .await;

    let client = reqwest::Client::new();
    let url = format!(
        "{}/api/v1/ping/mon_test_001/fail",
        mock_server.uri()
    );

    let resp = client.get(&url).send().await.unwrap();
    assert_eq!(resp.status(), 200);
}

#[tokio::test]
async fn test_cronpeek_unreachable_does_not_panic() {
    // Point at a port that is not listening
    let client = reqwest::Client::new();
    let result = client
        .get("http://127.0.0.1:1/api/v1/ping/mon_test_001")
        .timeout(std::time::Duration::from_secs(1))
        .send()
        .await;

    // Should be an error, not a panic
    assert!(result.is_err());
}

The third test verifies the most important property: your service does not panic when CronPeek is unreachable. This is a critical integration test for any production service that depends on external monitoring.

Best Practices for Rust Cron Monitoring

- Give every job its own monitor ID so an alert pinpoints exactly which job stopped.
- Ping only after the work actually succeeds, and use the /fail endpoint on errors for an immediate alert.
- Keep pings fire-and-forget with a short timeout so monitoring can never block or crash the job.
- Add an integration test that proves your service survives CronPeek being unreachable.

Monitor your Rust cron jobs in 60 seconds

Free tier includes 5 monitors. No credit card required. Set up a dead man's switch for your tokio-cron-scheduler, Actix-web, or Axum background tasks today.

Monitor 5 Cron Jobs Free

FAQ

How do I monitor a Rust cron job for silent failures?

After your Rust scheduled task completes, send an HTTP GET request to your CronPeek ping URL using reqwest. If CronPeek stops receiving pings within the expected interval, it triggers an alert via email, Slack, or webhook. Use the fire-and-forget pattern with tokio::spawn to avoid blocking your main task.

Does CronPeek work with tokio-cron-scheduler?

Yes. Inside your Job::new_async closure, add a reqwest call to your CronPeek ping URL after your task logic completes. The ping is a single HTTP GET that takes milliseconds. Use tokio::spawn if you want the ping to be fully non-blocking relative to the scheduler.

Can I use CronPeek with Actix-web background tasks?

Yes. Actix-web services commonly run background tasks via tokio::spawn. After each background task completes, send a heartbeat ping to CronPeek. The HTTP server continues responding to health checks even if the background task panics, so CronPeek's dead man's switch catches failures that health checks miss entirely.

How do I handle CronPeek being unreachable from my Rust service?

Use a fire-and-forget pattern: spawn the ping in a separate tokio task with a short timeout (3–5 seconds). If the request fails, log the error but do not retry or block. CronPeek's grace period handles transient network issues, so a single missed ping will not trigger a false alert.

How much does Rust cron job monitoring cost with CronPeek?

CronPeek's free tier includes 5 monitors with no credit card required. The Starter plan at $9/month covers 50 monitors, and Pro at $29/month gives unlimited monitors. Compared to Cronitor at roughly $2 per monitor per month, CronPeek is over 10x cheaper for teams running 50+ scheduled tasks.

The Peek Suite

CronPeek is part of a family of developer monitoring tools.