Monitor Ruby Cron Jobs with CronPeek API
Ruby applications rely heavily on scheduled background work. Rails apps use the Whenever gem to manage crontab entries from a clean Ruby DSL. Sidekiq processes millions of recurring jobs with sidekiq-cron and sidekiq-scheduler. The Clockwork gem runs an in-process scheduler for lightweight tasks. Rake tasks handle database maintenance, cache warming, and report generation. When any of these stop running, the failure is silent. No exception page. No error in your logs. Just a job that quietly stopped doing its work.
Dead man's switch monitoring catches these invisible failures. Your Ruby cron job pings an external endpoint after every successful run. If the ping stops arriving, you get an alert. This guide shows you how to wire that up with Net::HTTP, Faraday, the Whenever gem, Sidekiq scheduled jobs, Clockwork, and Rake tasks using CronPeek.
Why Ruby Cron Jobs Fail Silently
Ruby's flexibility and dynamic nature make cron failures particularly sneaky. A method that worked yesterday can silently break after a gem update. Common silent failure modes in Ruby cron jobs:
- Gem version conflicts after `bundle update` — a dependency bump introduces an incompatible API change. The script raises a `NoMethodError` or `LoadError` at boot, exits with code 1, and cron discards the output.
- Memory bloat in long-running processes — Ruby's garbage collector can struggle with large object allocations. A Sidekiq worker or Clockwork process slowly leaks memory until the OOM killer terminates it without warning.
- Database connection pool exhaustion — ActiveRecord's connection pool fills up when concurrent jobs exceed `pool` in `database.yml`. New jobs hang waiting for a connection and eventually time out silently.
- Whenever gem crontab not updated — you added a new task to `schedule.rb` but forgot to run `whenever --update-crontab` during deployment. The old crontab keeps running stale entries.
- Sidekiq process crashes — the Sidekiq server process dies due to a segfault in a native extension or a SIGKILL from systemd. Scheduled jobs stop being picked up, but the Rails app continues serving requests normally.
- Bundler environment mismatch — the crontab runs Ruby without the correct `BUNDLE_GEMFILE` or `GEM_HOME`, causing `require` to fail before your code even loads.
None of these trigger a 500 error or an exception tracker alert. You need a monitor that detects the absence of a signal.
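The Bundler environment mismatch in particular is easy to guard against at the crontab level by setting the environment explicitly instead of relying on a login shell. A sketch (the paths and log file are assumptions for illustration):

```bash
# Crontab with an explicit environment — cron does not source your shell profile
SHELL=/bin/bash
BUNDLE_GEMFILE=/var/www/app/Gemfile

# Run the job through bundle exec with RAILS_ENV set on the entry itself
0 * * * * cd /var/www/app && RAILS_ENV=production bundle exec ruby scripts/sync.rb >> /var/log/sync.log 2>&1
```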
How CronPeek Works: The Dead Man's Switch Pattern
A dead man's switch flips the monitoring model. Instead of checking whether your server is up, the server proves it's working by sending a heartbeat to an external monitor. The monitor expects a ping every N minutes. If a ping is late, it fires an alert.
This is the only reliable way to monitor scheduled tasks because the failure mode is silence. You can't poll for something that didn't happen.
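The monitor-side check is simple to picture. A minimal sketch of the idea (illustrative only, not CronPeek's actual implementation; the interval and grace values are made up):

```ruby
# A heartbeat is "late" once the time since the last ping exceeds
# the expected interval plus a grace period.
def ping_late?(last_ping_at, interval_s, grace_s, now: Time.now)
  (now - last_ping_at) > (interval_s + grace_s)
end

last_ping = Time.now - 3900        # last heartbeat arrived 65 minutes ago
ping_late?(last_ping, 3600, 120)   # 60-minute interval, 2-minute grace => true
```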
With CronPeek, the flow is:
- Create a monitor in CronPeek with an expected interval (e.g., every 60 minutes)
- Get your unique ping URL: `https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/{monitor_id}`
- Hit that URL at the end of each successful job run
- If the ping is late, CronPeek alerts you via email, Slack, or webhook
To report a failure explicitly, append `/fail` to the ping URL. This triggers an immediate alert rather than waiting for the heartbeat to expire.
Net::HTTP Example: The Simplest Ping
Ruby's standard library includes Net::HTTP, so you can ping CronPeek without adding any gems. This is the zero-dependency approach for standalone scripts and Rake tasks.
```ruby
# lib/cronpeek.rb — Reusable CronPeek helper
require 'net/http'
require 'uri'
require 'timeout'

module CronPeek
  BASE_URL = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping'

  def self.ping(monitor_id, failed: false)
    url = "#{BASE_URL}/#{monitor_id}"
    url += '/fail' if failed
    uri = URI.parse(url)

    Timeout.timeout(5) do
      response = Net::HTTP.get_response(uri)
      unless response.is_a?(Net::HTTPSuccess)
        warn "CronPeek ping unexpected status: #{response.code}"
        return false
      end
    end
    true
  rescue Timeout::Error
    warn 'CronPeek ping timed out after 5 seconds'
    false
  rescue StandardError => e
    warn "CronPeek ping failed: #{e.message}"
    false
  end
end
```
Use it at the end of any Ruby script:
```ruby
#!/usr/bin/env ruby
require_relative 'lib/cronpeek'

begin
  # Your cron job logic
  orders = process_unfulfilled_orders
  puts "Processed #{orders.count} orders"

  # Report success to CronPeek
  CronPeek.ping('mon_orders_001')
rescue StandardError => e
  STDERR.puts "Order processing failed: #{e.message}"
  # Report failure — triggers immediate alert
  CronPeek.ping('mon_orders_001', failed: true)
  exit 1
end
```
Key details: the 5-second `Timeout.timeout` prevents a CronPeek outage from blocking your job. The `StandardError` rescue covers network failures, DNS resolution errors, and SSL handshake problems without catching `SystemExit` or `SignalException`. The method returns `false` on failure so callers can log but not crash.
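If you want to convince yourself of that rescue behavior, here is a quick standalone check (the `guarded` helper is a hypothetical stand-in with the same rescue clause as the module above):

```ruby
# StandardError catches IOError, SocketError, and friends, but SystemExit
# and SignalException sit outside StandardError, so `exit` still works.
def guarded
  yield
  :ok
rescue StandardError => e
  warn "rescued: #{e.class}"
  :rescued
end

guarded { raise IOError, 'network down' }  # => :rescued
```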
Rails + Whenever Gem Integration
The Whenever gem provides a clean Ruby DSL for writing and deploying cron jobs. It translates your `config/schedule.rb` into crontab entries. Here's how to add CronPeek monitoring to Whenever-managed schedules.
Basic Whenever Schedule with CronPeek
The simplest approach is to chain a curl ping in the crontab entry itself. Whenever supports custom job types for this:
```ruby
# config/schedule.rb

# Define a custom job type that pings CronPeek after success
job_type :monitored_runner,
         "cd :path && bin/rails runner -e :environment ':task' " \
         "&& curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/:monitor_id " \
         "|| curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/:monitor_id/fail"

every 1.hour do
  monitored_runner 'InvoiceProcessor.run',
                   monitor_id: 'mon_rb_invoices'
end

every 1.day, at: '3:00 am' do
  monitored_runner 'DatabaseCleanup.purge_stale_records',
                   monitor_id: 'mon_rb_cleanup'
end

every :monday, at: '9:00 am' do
  monitored_runner 'WeeklyReportGenerator.generate',
                   monitor_id: 'mon_rb_weekly_report'
end
```
Monitoring Inside Rails Runner Scripts
For more control over error reporting, add monitoring directly to the Ruby class that Whenever invokes:
```ruby
# app/services/invoice_processor.rb
class InvoiceProcessor
  CRONPEEK_MONITOR = 'mon_rb_invoices'

  def self.run
    processor = new
    processor.process_pending_invoices
    CronPeek.ping(CRONPEEK_MONITOR)
  rescue StandardError => e
    Rails.logger.error("Invoice processing failed: #{e.message}")
    CronPeek.ping(CRONPEEK_MONITOR, failed: true)
    raise
  end

  # Public so self.run can call it on an explicit receiver;
  # marking it private would raise NoMethodError here.
  def process_pending_invoices
    Invoice.pending.find_each do |invoice|
      invoice.charge!
      invoice.update!(status: :paid, charged_at: Time.current)
    end
  end
end
```
Reusable Rails Concern for Monitored Jobs
If you have many scheduled tasks, extract the ping logic into a concern:
```ruby
# app/concerns/cronpeek_monitored.rb
module CronpeekMonitored
  extend ActiveSupport::Concern

  class_methods do
    def cronpeek_monitor(monitor_id)
      @cronpeek_monitor_id = monitor_id
    end

    def cronpeek_monitor_id
      @cronpeek_monitor_id
    end
  end

  private

  def ping_cronpeek(failed: false)
    monitor_id = self.class.cronpeek_monitor_id
    return unless monitor_id

    CronPeek.ping(monitor_id, failed: failed)
  end
end
```
Then use it in any service class:
```ruby
# app/services/database_cleanup.rb
class DatabaseCleanup
  include CronpeekMonitored
  cronpeek_monitor 'mon_rb_cleanup'

  def self.purge_stale_records
    job = new
    job.purge
    job.send(:ping_cronpeek)
  rescue StandardError => e
    new.send(:ping_cronpeek, failed: true)
    raise
  end

  def purge
    ActiveRecord::Base.transaction do
      Session.where('updated_at < ?', 30.days.ago).delete_all
      TempUpload.where('created_at < ?', 7.days.ago).destroy_all
      AuditLog.where('created_at < ?', 90.days.ago).in_batches.delete_all
    end
  end
end
```
Sidekiq Scheduled Jobs
Sidekiq is the most popular background job processor in the Ruby ecosystem. When combined with sidekiq-cron or sidekiq-scheduler, it handles recurring scheduled work. The danger is that the Sidekiq process can silently crash or the Redis connection can drop, and your scheduled jobs simply stop running.
Monitoring a Sidekiq Worker with sidekiq-cron
```ruby
# config/initializers/sidekiq_cron.rb
Sidekiq::Cron::Job.create(
  name: 'Order sync - every 10 minutes',
  cron: '*/10 * * * *',
  class: 'OrderSyncWorker'
)
```

```ruby
# app/workers/order_sync_worker.rb
class OrderSyncWorker
  include Sidekiq::Worker

  CRONPEEK_MONITOR = 'mon_sk_order_sync'

  def perform
    synced = sync_orders_from_api
    logger.info "Synced #{synced} orders from external API"
    CronPeek.ping(CRONPEEK_MONITOR)
  rescue StandardError => e
    logger.error "Order sync failed: #{e.message}"
    CronPeek.ping(CRONPEEK_MONITOR, failed: true)
    raise # Let Sidekiq handle retry logic
  end

  private

  def sync_orders_from_api
    orders = ExternalApi::Orders.fetch_pending
    orders.each { |order| Order.create_from_external!(order) }
    orders.size
  end
end
```
Sidekiq Middleware for Automatic Monitoring
If you want to monitor all Sidekiq workers without modifying each one, use server middleware:
```ruby
# config/initializers/sidekiq.rb
class CronPeekMiddleware
  def call(worker, job, queue)
    yield
    monitor_id = worker.class.try(:cronpeek_monitor_id)
    CronPeek.ping(monitor_id) if monitor_id
  rescue StandardError => e
    monitor_id = worker.class.try(:cronpeek_monitor_id)
    CronPeek.ping(monitor_id, failed: true) if monitor_id
    raise
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add CronPeekMiddleware
  end
end
```
Then tag any worker with a monitor ID:
```ruby
class ReportWorker
  include Sidekiq::Worker

  def self.cronpeek_monitor_id
    'mon_sk_reports'
  end

  def perform
    generate_daily_report
  end
end
```
Clockwork Gem Integration
The Clockwork gem runs an in-process scheduler as a standalone Ruby process. Unlike crontab-based solutions, it runs continuously in its own process, making it easy to deploy on platforms like Heroku. But if the Clockwork process dies, every scheduled task stops.
```ruby
# clock.rb
require 'logger'
require 'clockwork'
require_relative 'lib/cronpeek'

module Clockwork
  configure do |config|
    config[:logger] = Logger.new(STDOUT)
    config[:tz] = 'UTC'
  end

  every(10.minutes, 'sync.inventory') do
    begin
      InventorySync.run
      CronPeek.ping('mon_cw_inventory')
    rescue StandardError => e
      Clockwork.manager.config[:logger].error("Inventory sync failed: #{e.message}")
      CronPeek.ping('mon_cw_inventory', failed: true)
    end
  end

  every(1.hour, 'reports.hourly') do
    begin
      HourlyMetrics.generate
      CronPeek.ping('mon_cw_metrics')
    rescue StandardError => e
      Clockwork.manager.config[:logger].error("Metrics generation failed: #{e.message}")
      CronPeek.ping('mon_cw_metrics', failed: true)
    end
  end

  every(1.day, 'cleanup.expired', at: '04:00') do
    begin
      ExpiredTokenCleanup.run
      CronPeek.ping('mon_cw_cleanup')
    rescue StandardError => e
      Clockwork.manager.config[:logger].error("Cleanup failed: #{e.message}")
      CronPeek.ping('mon_cw_cleanup', failed: true)
    end
  end
end
```
Start Clockwork with `bundle exec clockwork clock.rb`. Because it runs as a long-lived process, you should also monitor the process itself with systemd or a process supervisor. CronPeek's heartbeat model catches both individual task failures and process-level crashes — if the Clockwork process dies, all heartbeats stop, and you get alerts for every task.
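A minimal systemd unit for the Clockwork process might look like this. The service name, user, and paths are assumptions for illustration; `Restart=always` is the piece that recovers from crashes without manual intervention:

```ini
# /etc/systemd/system/clockwork.service — illustrative sketch, adjust paths/user
[Unit]
Description=Clockwork scheduler
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/app
Environment=RAILS_ENV=production
ExecStart=/usr/local/bin/bundle exec clockwork clock.rb
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```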
Rake Task Monitoring
Rake tasks are commonly used for database maintenance, data migrations, and batch processing. They're often triggered from crontab or CI/CD pipelines. Here's how to add CronPeek monitoring to Rake tasks.
```ruby
# lib/tasks/maintenance.rake
require_relative '../cronpeek'

namespace :maintenance do
  desc 'Purge expired sessions and temporary records'
  task purge_expired: :environment do
    puts "Starting expired record purge..."

    expired_sessions = Session.where('expires_at < ?', Time.current).delete_all
    expired_tokens = ApiToken.where('expires_at < ?', Time.current).delete_all
    stale_uploads = TempUpload.where('created_at < ?', 24.hours.ago).destroy_all

    puts "Purged #{expired_sessions} sessions, #{expired_tokens} tokens, #{stale_uploads.size} uploads"
    CronPeek.ping('mon_rake_purge')
  rescue StandardError => e
    STDERR.puts "Purge failed: #{e.message}"
    CronPeek.ping('mon_rake_purge', failed: true)
    exit 1
  end

  desc 'Rebuild search indexes'
  task rebuild_search_index: :environment do
    puts "Rebuilding search indexes..."

    Product.find_in_batches(batch_size: 500) do |batch|
      SearchIndex.bulk_update(batch)
    end

    puts "Search index rebuild complete"
    CronPeek.ping('mon_rake_search_index')
  rescue StandardError => e
    STDERR.puts "Search index rebuild failed: #{e.message}"
    CronPeek.ping('mon_rake_search_index', failed: true)
    exit 1
  end
end
```
Trigger these from crontab:
```bash
# Purge expired records every 6 hours
0 */6 * * * cd /var/www/app && RAILS_ENV=production bundle exec rake maintenance:purge_expired

# Rebuild search indexes nightly at 2 AM
0 2 * * * cd /var/www/app && RAILS_ENV=production bundle exec rake maintenance:rebuild_search_index
```
Error Handling with Faraday
For production Rails applications, Faraday is a popular HTTP client choice. It provides configurable middleware, automatic retries via middleware, pluggable adapters (including persistent-connection adapters), and a more granular exception hierarchy than Net::HTTP.
```ruby
# lib/cronpeek_faraday.rb
require 'faraday'

module CronPeek
  BASE_URL = 'https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping'

  class Client
    def initialize
      @conn = Faraday.new do |f|
        f.options.timeout = 5
        f.options.open_timeout = 3
        f.headers['User-Agent'] = 'CronPeek-Ruby-Faraday/1.0'
        f.adapter Faraday.default_adapter
      end
    end

    def ping(monitor_id)
      response = @conn.get("#{BASE_URL}/#{monitor_id}")
      unless response.success?
        warn "CronPeek ping unexpected status: #{response.status}"
        return false
      end
      true
    rescue Faraday::TimeoutError
      warn 'CronPeek ping timed out'
      false
    rescue Faraday::ConnectionFailed => e
      warn "CronPeek connection failed: #{e.message}"
      false
    rescue Faraday::Error => e
      warn "CronPeek ping failed: #{e.message}"
      false
    end

    def fail(monitor_id)
      @conn.get("#{BASE_URL}/#{monitor_id}/fail")
    rescue Faraday::Error => e
      warn "CronPeek fail ping failed: #{e.message}"
    end
  end

  def self.client
    @client ||= Client.new
  end

  def self.ping(monitor_id, failed: false)
    if failed
      client.fail(monitor_id)
    else
      client.ping(monitor_id)
    end
  end
end
```
Faraday's explicit exception hierarchy makes error handling precise. `Faraday::TimeoutError` covers both connection and read timeouts. `Faraday::ConnectionFailed` catches DNS failures, refused connections, and SSL errors. The memoized `client` method reuses the Faraday connection across pings instead of rebuilding it for every call.
Faraday with Retry Middleware
For critical jobs where you want the ping to survive transient network issues, add retry middleware:
```ruby
@conn = Faraday.new do |f|
  # On Faraday 2.x the retry middleware lives in the faraday-retry gem
  f.request :retry, max: 2, interval: 0.5,
                    exceptions: [Faraday::TimeoutError, Faraday::ConnectionFailed]
  f.options.timeout = 5
  f.options.open_timeout = 3
  f.adapter Faraday.default_adapter
end
```
This retries up to 2 times with a 0.5-second delay between attempts. The total worst-case time is 16 seconds (5s timeout + 0.5s wait + 5s retry + 0.5s wait + 5s retry), so only use this for jobs where the ping is critical and the extra latency is acceptable.
Timeout and Error Handling Best Practices
The monitoring layer should never interfere with the job it's monitoring. Here are the rules that prevent CronPeek from becoming a liability in Ruby applications:
- Hard timeout of 5 seconds — wrap every ping in `Timeout.timeout(5)` or use Faraday's built-in timeout options. If CronPeek is unreachable, your script moves on without blocking.
- Never raise on ping failure — rescue all exceptions from the ping call and log them with `warn` or `Rails.logger`. A CronPeek outage should not cascade into a cron job failure.
- Report failures explicitly — don't just skip the ping on error. Hit the `/fail` endpoint so you get an immediate alert instead of waiting for the heartbeat to expire. Two alert paths are better than one.
- One monitor per job — don't share a monitor ID across different scheduled tasks. You need to know which job failed, not just that something failed.
- Set grace periods generously — if your job runs every 5 minutes but sometimes takes 90 seconds, set the expected interval to 8 minutes. GC pauses, database latency, and network jitter are real.
- Ping at the end, not the beginning — a ping at the start of the job only proves the job started. A ping at the end proves it completed. Always ping after the critical work is done.
- Re-raise after reporting failure — when using Sidekiq, always `raise` after pinging the fail endpoint. This lets Sidekiq's retry mechanism attempt the job again. CronPeek alerts you immediately; Sidekiq retries the work.
- Watch for Bundler issues in crontab — crontab doesn't source your shell profile. Always use `bundle exec` and set `RAILS_ENV` explicitly in your crontab entries.
Quick Reference: Crontab Entry with CronPeek
For the simplest possible integration, you can chain a curl ping directly in your crontab entry without modifying any Ruby code:
```bash
# Process orders every 10 minutes, ping CronPeek on success or failure
# (each crontab entry must be a single line — cron has no backslash continuation)
*/10 * * * * cd /var/www/app && RAILS_ENV=production bundle exec rake orders:process && curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_rb_orders || curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_rb_orders/fail

# Whenever-managed schedule with monitoring
* * * * * /bin/bash -l -c 'cd /var/www/app && bundle exec rails runner -e production "InvoiceProcessor.run"' && curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_rb_invoices

# Sidekiq process health check (ping every minute to prove Sidekiq is alive)
* * * * * curl -sf http://localhost:3000/sidekiq/stats && curl -sf https://us-central1-todd-agent-prod.cloudfunctions.net/cronpeekApi/ping/mon_sidekiq_alive
```
This approach works for any Ruby script without touching application code. The `-sf` curl flags silence progress output and make curl exit non-zero on HTTP errors without printing an error page, keeping your cron logs clean.
Monitor your Ruby cron jobs in 60 seconds
Free tier includes 5 monitors. No credit card required. Set up a dead man's switch for your Rails, Sidekiq, or Clockwork scheduled jobs today.
FAQ
How do I monitor a Ruby cron job for silent failures?
After your Ruby cron script completes, send an HTTP GET request to your CronPeek ping URL using Net::HTTP or Faraday. If CronPeek stops receiving pings within the expected interval, it triggers an alert via email, Slack, or webhook. This catches silent failures like unhandled exceptions, memory bloat, and scripts that exit early without completing their work.
How do I monitor Rails Whenever gem schedules with CronPeek?
The Whenever gem generates crontab entries from a Ruby DSL in config/schedule.rb. You can define a custom job_type that chains a curl ping after your task, or add CronPeek pings directly inside your Rails runner scripts. For maximum control, use a reusable concern that pings CronPeek on success or failure, keeping your schedule.rb clean.
Can I monitor Sidekiq scheduled and recurring jobs with CronPeek?
Yes. For Sidekiq workers triggered by sidekiq-cron or sidekiq-scheduler, add a CronPeek ping at the end of your perform method. On success, hit the standard ping endpoint. On failure, hit the /fail endpoint for an immediate alert, then re-raise the exception so Sidekiq can retry. You can also use Sidekiq server middleware to automatically monitor all tagged workers.
What is a dead man's switch for Ruby cron jobs?
A dead man's switch is a monitoring pattern where your Ruby scheduled task sends a heartbeat ping to an external service like CronPeek after each successful run. If the ping stops arriving within the configured grace period, the service assumes the job has failed and sends an alert. Unlike uptime monitoring, it detects when something stops happening — the exact failure mode of cron jobs and scheduled tasks.
How much does Ruby cron job monitoring cost with CronPeek?
CronPeek's free tier includes 5 monitors with no credit card required. The Starter plan at $9/month covers 50 monitors, and Pro at $29/month gives unlimited monitors. Compared to Cronitor at roughly $2 per monitor per month, CronPeek is over 10x cheaper for teams with 50+ scheduled tasks.
The Peek Suite
CronPeek is part of a family of developer monitoring tools.