When Data Stops Flowing Silently
Webhooks from third-party services (Stripe, Shopify, GitHub) and ETL pipelines (Airbyte, dbt, custom scripts) are the arteries of modern SaaS applications. Unlike user-facing pages, these data flows fail silently. A Stripe webhook endpoint that returns 200 but stops being called, or a nightly ETL job that quietly crashes, can go undetected for days — causing stale dashboards, missed invoices, and broken analytics.
Heartbeat Monitoring for Webhook Receivers
If your application expects to receive webhooks at a predictable frequency, heartbeat monitoring can verify the flow continues. Add a ping to FourSight inside your webhook handler so that each successful processing cycle confirms the pipeline is alive.
High-Frequency Webhooks
For webhooks that fire many times per hour (e.g. order events), use a fixed interval like 5 or 10 minutes. Your handler pings FourSight on each successful batch or on a periodic timer within the handler process.
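The periodic-timer approach can be sketched as a small throttle around the ping call, so a busy endpoint confirms liveness without issuing an HTTP request on every single event. A minimal sketch in Python, where `send` stands in for whatever HTTP call your handler makes to its FourSight heartbeat URL (the class and parameter names here are illustrative):

```python
import time

class ThrottledPing:
    """Forward at most one heartbeat ping per interval, no matter
    how many webhook events arrive in between."""

    def __init__(self, send, min_interval=300.0):
        self.send = send                  # callable that performs the HTTP ping
        self.min_interval = min_interval  # seconds between pings (5 min here)
        self._last = float("-inf")        # so the first call always pings

    def maybe_ping(self):
        """Call after each successfully processed event; pings if due."""
        now = time.monotonic()
        if now - self._last >= self.min_interval:
            self.send()
            self._last = now
            return True
        return False
```

Calling `maybe_ping()` on every successful event keeps the heartbeat flowing at roughly the configured interval while the endpoint stays busy, and stops it as soon as events stop arriving.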
Daily or Weekly Webhooks
For less frequent events (e.g. a weekly subscription renewal batch from a payment provider), use cron-expression mode to match the expected schedule with a generous grace period.
// Inside your webhook handler (Node.js / Express)
app.post("/webhooks/stripe", async (req, res) => {
  try {
    await processStripeEvent(req.body);
    // Confirm the pipeline is alive
    await fetch("https://ping.foursight.cloud/hb/<YOUR_TOKEN>");
    res.sendStatus(200);
  } catch (err) {
    console.error("Webhook processing failed:", err);
    res.sendStatus(500);
  }
});
Monitoring ETL & Data Pipelines
ETL jobs — whether built with dbt, Airflow, custom Python, or simple SQL scripts — should ping FourSight after a successful run. This catches not just crashes but also silent hangs where the process is stuck waiting on a locked table or an unresponsive API.
dbt & Airflow
Add a curl ping as the final task in your DAG or a post-hook in dbt. The ping only fires if all upstream tasks succeed.
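As a sketch, the final Airflow task might look like the following. The DAG and task names are hypothetical, and the exact operator imports vary by Airflow version; the key point is that the curl ping sits downstream of the real work, so it only fires when everything before it succeeded:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_warehouse_refresh",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                # nightly at 02:00
    catchup=False,
) as dag:
    run_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run",
    )
    ping = BashOperator(
        task_id="ping_foursight",
        # -f makes curl exit non-zero on HTTP errors, so a failed
        # ping also marks this task failed
        bash_command="curl -fsS --retry 3 https://ping.foursight.cloud/hb/<YOUR_TOKEN>",
    )
    run_models >> ping  # the ping runs only if upstream succeeded
```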
Database-to-Warehouse Sync
After your nightly sync completes, ping FourSight. Pair this with a row-count validation step so you catch both 'didn't run' and 'ran but imported zero rows' failures.
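A minimal sketch of that pairing, with `send_ping` standing in for the actual HTTP call to your heartbeat URL (the function name and threshold are illustrative):

```python
def confirm_sync(rows_loaded, send_ping, min_rows=1):
    """Ping only when the sync both finished and actually moved data.

    A run that completes but loads zero rows withholds the heartbeat,
    so the missed ping surfaces it just like a crash would.
    """
    if rows_loaded >= min_rows:
        send_ping()
        return True
    return False
```

Set `min_rows` to whatever baseline makes sense for the sync; for tables that legitimately have empty days, a separate freshness check on the source side is a better guard than raising the threshold.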
# At the end of your ETL script
import requests

def run_etl():
    extract()
    transform()
    load()
    # Signal success
    requests.get("https://ping.foursight.cloud/hb/<YOUR_TOKEN>", timeout=10)
run_etl()

Designing Grace Periods for Data Pipelines
Data pipelines often have variable execution times. A nightly warehouse refresh might take 12 minutes on Monday and 45 minutes on month-end. Set your grace period to cover the worst-case runtime, and use the miss threshold to require 2 consecutive misses before alerting — this prevents false alarms during occasional slow runs while still catching genuine failures promptly.
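The interplay between the grace period and the miss threshold can be sketched as follows. This illustrates the general pattern rather than FourSight's exact internal logic; the function names are hypothetical:

```python
from datetime import datetime, timedelta

def is_miss(expected_at, grace, pinged_at=None):
    """A run counts as missed if no ping arrived by expected_at + grace."""
    deadline = expected_at + grace
    return pinged_at is None or pinged_at > deadline

def should_alert(consecutive_misses, threshold=2):
    """Alert only after `threshold` consecutive misses, so one slow
    month-end run does not page anyone."""
    return consecutive_misses >= threshold
```

With a nightly run expected at 02:00 and a 60-minute grace period, a ping at 02:45 is on time, a ping at 03:30 is a miss, and with a threshold of 2 the alert fires only if the next night misses as well.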
Combining Heartbeats with HTTP Monitors
For critical data flows, use both: an HTTP monitor on your webhook endpoint to verify it's reachable, and a heartbeat monitor to verify it's actually being called and processing successfully. The HTTP check catches infrastructure failures; the heartbeat catches logical failures where the endpoint is up but no events are arriving.