I Run 28 Cron Jobs on a Free Database. Here's How I Keep Them Alive.
I have 28 scheduled jobs running on Supabase's pg_cron extension. They sync data from 10+ APIs, send churn alerts, refresh analytics tables, trigger email reminders, and keep 90 database tables current. All on the free tier. Here's how I organize them without going insane.
Why pg_cron Instead of a Task Queue
When people need scheduled tasks, they reach for BullMQ, Celery, AWS Lambda + EventBridge, or some other external scheduler. That means another service to deploy, monitor, and pay for.
pg_cron is different. It runs inside your Postgres database. No separate service. No extra infrastructure. You schedule a SQL statement or an HTTP call with a cron expression, and Postgres runs it. If you're already on Supabase, you already have it — just enable the extension.
-- Schedule an Edge Function to run daily at 7 AM ET
SELECT cron.schedule(
  'daily-sync',
  '0 11 * * *', -- 11:00 UTC = 7:00 AM ET (during EDT)
  $$SELECT net.http_post(
    url := 'https://your-project.supabase.co/functions/v1/sync',
    headers := '{"Authorization": "Bearer YOUR_KEY"}'::jsonb
  )$$
);
That's it. No deployment. No YAML. No infrastructure. One SQL statement and you have a cron job.
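Day-to-day management is just as lightweight: the whole job lifecycle is a couple of catalog calls. A quick sketch, using the 'daily-sync' job from the example above:

```sql
-- See every job, its schedule, and whether it's active
SELECT jobid, jobname, schedule, active FROM cron.job;

-- cron.schedule upserts by name, so re-running it with a new
-- expression reschedules the job in place
SELECT cron.schedule('daily-sync', '30 11 * * *', $$SELECT 1$$);

-- Remove a job by name
SELECT cron.unschedule('daily-sync');
```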
My 28 Jobs, Categorized
The mistake people make is treating cron jobs as a flat list. By the time you have 10+, you can't remember what runs when or why. I organize mine into 5 categories:
1. Data Ingest (hourly)
API syncs that pull data from external platforms into the warehouse. These run hourly because dashboards need near-real-time data, but full refreshes are too expensive to run every hour.
- Call recording sync (2 platforms, incremental only)
- CRM contact sync
- Marketplace metrics refresh
2. Full Refresh (weekly)
Complete DELETE + INSERT refreshes that catch anything incremental missed: retroactive edits, backfilled data, deleted records. These run on Monday mornings when API rate limits are less contended.
- Call recording full refresh (2 platforms, staggered 30 min apart)
- CRM full refresh
- Content platform full refresh
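The refresh itself can be one multi-statement command; pg_cron sends the whole command string in a single implicit transaction, so readers never see the table half-empty. A minimal sketch with made-up table names:

```sql
SELECT cron.schedule(
  'weekly-crm-full-refresh',
  '0 9 * * 1',  -- Monday 09:00 UTC
  $$
  DELETE FROM crm_contacts;
  INSERT INTO crm_contacts SELECT * FROM crm_contacts_staging;
  $$
);
```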
3. Computed Tables (daily)
Postgres functions that recompute analytics tables from raw data. These run after ingest jobs complete.
- Product impact metrics (recomputes rollup from billing + usage data)
- Tracking data aggregation (rolls up events into daily summaries)
- Ticket triage (AI classification of new tickets)
- Pipeline article refresh (snapshots current CMS state)
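Each of these is a plpgsql function plus a schedule. A hypothetical rollup, with illustrative table and column names:

```sql
-- Recompute the last 7 days of daily summaries from raw events
CREATE OR REPLACE FUNCTION refresh_daily_usage() RETURNS void AS $$
BEGIN
  DELETE FROM daily_usage WHERE day >= current_date - 7;
  INSERT INTO daily_usage (day, account_id, events)
  SELECT created_at::date, account_id, count(*)
  FROM raw_events
  WHERE created_at >= current_date - 7
  GROUP BY 1, 2;
END;
$$ LANGUAGE plpgsql;

-- 30 minutes after the last ingest job of the morning
SELECT cron.schedule('refresh-daily-usage', '45 11 * * *',
  'SELECT refresh_daily_usage()');
```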
4. Alerts & Notifications (daily)
Jobs that check conditions and fire alerts when thresholds are met.
- Churn detection (posts to Slack when signals trigger)
- Data quality check (flags tables that haven't updated in 48 hours)
- Monthly email reminders (first of the month)
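The data quality check is the simplest of these: query the freshest timestamp per watched table and flag anything stale. A sketch, assuming each synced table carries an `updated_at` or `created_at` column (names are illustrative):

```sql
-- Tables that haven't seen a write in 48 hours
SELECT table_name, last_update
FROM (
  SELECT 'crm_contacts' AS table_name, max(updated_at) AS last_update
  FROM crm_contacts
  UNION ALL
  SELECT 'raw_events', max(created_at) FROM raw_events
) freshness
WHERE last_update < now() - interval '48 hours';
```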
5. Maintenance (weekly)
Cleanup and optimization jobs.
- Stale session purge
- Duplicate detection
- Embedding refresh for new/changed records
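Maintenance jobs tend to be one-liners. A stale-session purge might look like this (table and column names are hypothetical):

```sql
SELECT cron.schedule(
  'purge-stale-sessions',
  '0 8 * * 0',  -- Sunday 08:00 UTC
  $$DELETE FROM sessions WHERE last_seen_at < now() - interval '30 days'$$
);
```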
The Scheduling Gotchas
Stagger your jobs
If 5 jobs are scheduled at exactly 0 11 * * *, they all fire at the same second and compete for database connections. Stagger by 5-15 minutes:
-- DON'T: Everything at :00
'0 11 * * *' -- sync-a
'0 11 * * *' -- sync-b
'0 11 * * *' -- sync-c
-- DO: Stagger
'0 11 * * *' -- sync-a (11:00)
'5 11 * * *' -- sync-b (11:05)
'10 11 * * *' -- sync-c (11:10)
Time zones will bite you
pg_cron runs in UTC. Always. If you want 7 AM Eastern, that's 0 11 * * * during EDT and 0 12 * * * during EST. I keep a small reference table mapping each job to its intended local time, so I don't have to do mental math every time I check the schedule.
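That mapping can live in the database next to the jobs themselves. A hypothetical version:

```sql
CREATE TABLE cron_schedule_notes (
  job_name text PRIMARY KEY,
  cron_expr text NOT NULL,
  local_time text NOT NULL  -- plain English, for humans
);

INSERT INTO cron_schedule_notes VALUES
  ('daily-sync', '0 11 * * *', '7:00 AM ET in summer (EDT), 6:00 AM in winter (EST)');
```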
Edge Function timeouts
pg_cron calls your Edge Function via HTTP. If the function takes longer than the HTTP timeout (default 60s on some plans, up to 300s on others), the cron job "succeeds" (the HTTP call was made) but the function may not have finished. For long-running syncs, I use a pattern:
- Edge Function receives the cron trigger
- Immediately returns 200 ("accepted")
- Continues processing in the background
- Writes a completion timestamp to a status table
Then a separate health check job verifies that completion timestamps are recent.
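The health check side of that pattern is plain SQL. This sketch assumes each long-running function writes a row to a `sync_status` table when it finishes; the table name and Slack webhook URL are placeholders:

```sql
SELECT cron.schedule(
  'sync-health-check',
  '0 13 * * *',  -- a couple of hours after the syncs should be done
  $$
  SELECT net.http_post(
    url := 'https://hooks.slack.com/services/YOUR/WEBHOOK',
    body := jsonb_build_object('text', 'Stale sync: ' || job_name)
  )
  FROM sync_status
  WHERE completed_at < now() - interval '25 hours'
  $$
);
```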
The "did it actually run?" problem
pg_cron doesn't have a built-in dashboard. You can query cron.job_run_details to see recent runs, but the retention is limited. I log every execution to a cron_log table with the job name, start time, status, and row count. This is the single most useful thing I've built for cron management:
CREATE TABLE cron_log (
  id serial PRIMARY KEY,
  job_name text NOT NULL,
  started_at timestamptz DEFAULT now(),
  status text DEFAULT 'running',
  rows_affected int,
  error text,
  finished_at timestamptz
);
Where This Applies Beyond My Setup
This pattern works for any business running on Supabase (or any Postgres with pg_cron):
- E-commerce: Hourly inventory sync from suppliers, daily price comparisons, weekly sales report emails, abandoned cart reminders
- SaaS: Usage metric rollups, trial expiration alerts, weekly digest emails, CRM sync, feature flag cleanup
- Healthcare: Appointment reminder triggers, prescription refill checks, daily compliance report generation
- Real estate: Listing sync from MLS, daily market price updates, lead follow-up reminders, monthly portfolio reports
- Agencies: Client report generation, social media metric sync, invoice reminders, project health checks
The Rules I Follow
- Name every job descriptively. 'sync-crm-incremental', not 'job_7'. You'll thank yourself at 2 AM.
- Stagger by 5-15 minutes. Never schedule two jobs at the same time.
- Log everything. A cron_log table is cheap and saves hours of debugging.
- Separate incremental from full refresh. Hourly incremental for speed, weekly full for integrity.
- Run computed tables after ingest. Schedule them 30+ minutes after the data arrives.
- Comment your UTC offsets. Future you won't remember that 0 16 * * 5 means "Friday noon Eastern."
- Build a health check. One job that runs daily and verifies all other jobs completed recently. Alert on staleness, not on failure — silent failures are the dangerous ones.
"How do you keep track of all those jobs?"
"A table and a naming convention. That's literally it."
(Managing 28 jobs means remembering what each one does, when it runs, and what to check when it breaks. I capture all of this in Brain Kit — my MCP memory server — so I can ask "what does the Monday morning sync do?" and get the answer from any AI tool.)
28 cron jobs sounds like a lot. It's not. It's 5 categories, a naming convention, a log table, and a health check. The entire system runs on a free Supabase instance, costs $0/month, and I check on it maybe once a week. pg_cron is the most underrated feature in Postgres.
Brain Kit ($29)
Brain Kit runs on the same Supabase + Edge Function stack. One-command deploy, semantic search, 5 capture channels.
Get Brain Kit — $29
Like what I build? Check out the shop — deploy-ready kits starting at $14.