Webhooks

By the Pigeon Perch team

Get notified at your own HTTPS endpoint the moment something happens in Pigeon Perch. Webhooks are how you keep an external database, CRM, or downstream system in sync without polling our API.

When to use webhooks

Use webhooks when you need to react to things that happen inside Pigeon Perch without polling the API. The most common use case is keeping subscription state synchronized across systems: when a recipient clicks “unsubscribe” in an email footer, your own application learns about it within about a minute (see Delivery behavior) and stops sending to that contact. No nightly batch sync, no risk of an embarrassing send to someone who already opted out.

Other practical applications:

  • Suppress bounced and complained contacts in your transactional email system.
  • Push form submissions into your CRM as new leads.
  • Score engagement in a data warehouse off email opens.
  • Trigger workflows in Zapier, n8n, Make, or your own automation tooling.

Setting up a webhook

Head to Settings → Integrations → Webhooks in the dashboard and click Add webhook. You'll need:

  • An HTTPS URL that accepts POST requests with a JSON body. We don't allow plain HTTP.
  • One or more event types (see the catalog below).
  • An optional description — helpful when you have several webhooks pointing at different downstream systems.

After you save, we show your signing secret exactly once. Copy it immediately and store it somewhere safe (1Password, AWS Secrets Manager, your own KMS, etc.) — we don't keep a recoverable copy. If you lose it, delete the webhook and create a new one.

Event catalog

Event                  When it fires
contact.created        A new contact is added to your organization (any source).
contact.updated        A mutable field on a contact changes.
contact.unsubscribed   A contact unsubscribes, via email link, preferences page, or your own API call.
contact.resubscribed   A previously unsubscribed contact opts back in.
contact.bounced        A hard bounce was recorded for this contact. Soft bounces are not delivered.
contact.complained     A spam complaint was reported via the email provider feedback loop.
email.opened           The first open of an email by a recipient (within 24 hours of send).
form.submitted         A tracked website form was submitted.

Payload format

Every webhook is a POST with a JSON body. Because Pigeon Perch batches events (see Delivery behavior), the body is always a batch wrapper containing one or more events under events[]. Your receiver should iterate the array unconditionally — even a single event arrives wrapped in a batch of size one.

{
  "id": "batch_2c4f8a1d3b6e9a0c1f2d3e4b5a6c7d8e",
  "type": "batch",
  "createdAt": "2026-04-20T18:24:11.000Z",
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "eventCount": 2,
  "events": [
    {
      "id": "evt_abc123...",
      "type": "contact.updated",
      "createdAt": "2026-04-20T18:23:14.120Z",
      "data": {
        "contact": {
          "id": "0c8e1f3a-2b5c-4d6e-9f0a-1b2c3d4e5f60",
          "email": "user@example.com",
          "firstName": "Jane",
          "lastName": "Doe",
          "externalId": "their-app-id-123"
        }
      }
    },
    {
      "id": "evt_def456...",
      "type": "contact.unsubscribed",
      "createdAt": "2026-04-20T18:23:47.801Z",
      "data": {
        "contact": {
          "id": "0c8e1f3a-2b5c-4d6e-9f0a-1b2c3d4e5f60",
          "email": "user@example.com",
          "status": "unsubscribed",
          "unsubscribedAt": "2026-04-20T18:23:47.801Z",
          "suppressionGroupId": null
        }
      }
    }
  ]
}

Top-level fields

Field            Type                 Description
id               string               Unique per batch delivery, prefixed batch_. A retry of a failed delivery ships the same id, so use it for idempotency on your side.
type             string               Always "batch" in v1. Reserved for future non-batch deliveries.
createdAt        ISO 8601 timestamp   The moment the window closed and the batch was assembled.
organizationId   UUID                 Your Pigeon Perch organization ID. Useful if you route multiple orgs to one endpoint.
eventCount       number               Length of events[]. Convenience field for logging and metrics.
events           array                The individual events, sorted oldest-first by createdAt.

Individual event fields

Field       Type                 Description
id          string               Unique per logical event, prefixed evt_. Stable across retries. Use this per-event for dedup.
type        string               One of the event types from the catalog above.
createdAt   ISO 8601 timestamp   The moment the underlying change happened in Pigeon Perch.
data        object               The event payload. Shape depends on type; for contact.* events it contains a contact object that mirrors GET /api/v1/contacts/{id}.

You don't need to make a follow-up API call — the payload is authoritative.
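Because every delivery is a batch, a receiver boils down to one loop over events[] with a dispatch on type. A minimal Python sketch (handle_batch and the handler mapping are our names, not part of the payload):

```python
def handle_batch(batch: dict, handlers: dict) -> list:
    """Dispatch each event in a batch to a handler keyed on event type.

    Returns the ids of the events that were handled.
    """
    processed = []
    # Iterate unconditionally: a single event still arrives in a batch of one.
    for event in batch["events"]:
        handler = handlers.get(event["type"])
        if handler is not None:
            handler(event["data"])
            processed.append(event["id"])
    return processed
```

handlers maps event types (e.g. "contact.unsubscribed") to your own callables; unknown types are silently skipped, so new event types added later won't break the receiver.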

Request headers

Header                 Value
X-Pigeon-Signature     HMAC signature of the raw body; see Verifying signatures.
X-Pigeon-Event-Id      The batch id. Use for idempotency.
X-Pigeon-Event-Type    Always batch.
X-Pigeon-Event-Count   Number of events in the batch.

Delivery behavior

Pigeon Perch batches events in 60-second windows per subscription. When the first event fires, a window opens; every event produced for that subscription during the next 60 seconds joins the same batch. When the window closes, all collected events are delivered as a single POST. One quiet event → one POST a minute later. A flurry of 1,000 events → still one POST.

Each delivery attempt is given 10 seconds to return a 2xx response. On failure we retry with exponential backoff starting at 5 seconds, up to 5 attempts in total (see Reliability and retries). A retry ships the same batch contents with the same batch id, so you can dedupe safely on your side.

Deliveries are at-least-once. If a retry succeeds after an earlier attempt already hit your server but failed to respond, you'll receive the same batch twice. Deduplicate using the batch id (top-level) or per-event events[].id.

Within a batch, events are sorted oldest-first by createdAt. Across batches, ordering is not strictly guaranteed — always sequence state on your side by createdAt rather than arrival order.
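The two rules above (dedupe on event id, sequence by createdAt) can be combined in one gate per contact.* event. A Python sketch; the in-memory set and dict stand in for whatever durable store you actually use, and should_apply is our name, not part of the payload:

```python
from datetime import datetime

processed_event_ids = set()   # evt_ ids already handled (dedupes retried batches)
last_applied = {}             # contact id -> newest createdAt we have applied

def should_apply(event: dict) -> bool:
    """Return True if a contact.* event should be applied to local state.

    Dedupes on the stable event id, and drops events older than the last
    one applied for the same contact (cross-batch order isn't guaranteed).
    """
    if event["id"] in processed_event_ids:
        return False  # retry of a batch we already handled
    processed_event_ids.add(event["id"])
    created = datetime.fromisoformat(event["createdAt"].replace("Z", "+00:00"))
    contact_id = event["data"]["contact"]["id"]
    last = last_applied.get(contact_id)
    if last is not None and created < last:
        return False  # a newer event for this contact already arrived
    last_applied[contact_id] = created
    return True
```

In production, back both structures with your database (a unique index on the event id table gives you the dedup for free).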

Verifying signatures

Every request includes an X-Pigeon-Signature header that looks like:

X-Pigeon-Signature: t=1745175791,v1=a1b2c3d4...

The v1 value is an HMAC-SHA256 of `${timestamp}.${rawRequestBody}` keyed by your subscription secret. Reject any request where the signature doesn't match, or where t is more than five minutes off your wall clock (that's a replay-attack guard).

Node.js

import { createHmac, timingSafeEqual } from "node:crypto";

function verify(secret, rawBody, header) {
  const parts = Object.fromEntries(
    header.split(",").map((p) => p.split("=")),
  );
  const t = Number(parts.t);
  const v1 = parts.v1;
  if (!t || !v1) return false;
  // Reject timestamps more than five minutes off our clock (replay guard).
  if (Math.abs(Date.now() / 1000 - t) > 300) return false;

  const expected = createHmac("sha256", secret)
    .update(`${t}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(v1);
  // timingSafeEqual throws if the buffers differ in length, so check first.
  return a.length === b.length && timingSafeEqual(a, b);
}

C# (Blazor / ASP.NET)

using System.Linq;
using System.Security.Cryptography;
using System.Text;

bool Verify(string secret, string rawBody, string header) {
  var parts = header.Split(',')
    .Select(p => p.Split('=', 2))
    .Where(p => p.Length == 2)
    .ToDictionary(p => p[0], p => p[1]);

  if (!parts.TryGetValue("t", out var ts) || !parts.TryGetValue("v1", out var v1))
    return false;
  if (!long.TryParse(ts, out var t)) return false;
  var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
  if (Math.Abs(now - t) > 300) return false;  // replay guard

  var key = Encoding.UTF8.GetBytes(secret);
  var data = Encoding.UTF8.GetBytes($"{t}.{rawBody}");
  using var hmac = new HMACSHA256(key);
  var expected = Convert.ToHexString(hmac.ComputeHash(data)).ToLowerInvariant();
  return CryptographicOperations.FixedTimeEquals(
    Encoding.UTF8.GetBytes(expected),
    Encoding.UTF8.GetBytes(v1)
  );
}

Python

import hmac, hashlib, time

def verify(secret: str, raw_body: bytes, header: str) -> bool:
    try:
        # split("=", 1) tolerates stray "=" in values; malformed headers fail closed.
        parts = dict(p.split("=", 1) for p in header.split(","))
        t = int(parts["t"])
    except (KeyError, ValueError):
        return False
    if abs(time.time() - t) > 300:  # replay guard
        return False

    expected = hmac.new(
        secret.encode(),
        f"{t}.".encode() + raw_body,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, parts.get("v1", ""))
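To sanity-check a verifier end to end, sign a payload yourself the same way the server does (HMAC-SHA256 over `{t}.{body}`) and feed it back through. A self-contained Python sketch; the sign helper is ours for testing, not a Pigeon Perch API, and a compact verifier is repeated inline so the snippet runs standalone:

```python
import hmac, hashlib, time

def sign(secret: str, raw_body: bytes, t=None) -> str:
    """Build an X-Pigeon-Signature header value the way the server does."""
    t = int(time.time()) if t is None else t
    mac = hmac.new(secret.encode(), f"{t}.".encode() + raw_body, hashlib.sha256)
    return f"t={t},v1={mac.hexdigest()}"

def verify(secret: str, raw_body: bytes, header: str) -> bool:
    try:
        parts = dict(p.split("=", 1) for p in header.split(","))
        t = int(parts["t"])
    except (KeyError, ValueError):
        return False
    if abs(time.time() - t) > 300:
        return False
    expected = hmac.new(
        secret.encode(), f"{t}.".encode() + raw_body, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, parts.get("v1", ""))

body = b'{"id":"batch_x","type":"batch","events":[]}'
assert verify("s3cret", body, sign("s3cret", body))           # valid signature passes
assert not verify("wrong", body, sign("s3cret", body))        # wrong key fails
assert not verify("s3cret", body, sign("s3cret", body, t=0))  # stale timestamp fails
```

This round trip is also a convenient fixture for integration tests of your receiver: sign known bodies with a throwaway secret instead of replaying production deliveries.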

Reliability and retries

  • At-least-once delivery. If your endpoint returns a non-2xx status, times out (after 10 seconds), or fails to resolve, we retry with exponential backoff. Up to 5 attempts spread over roughly 16 minutes.
  • Be idempotent on your side. Every delivery includes X-Pigeon-Event-Id and the same id in the body. Track the IDs you've already processed and no-op on repeats; we may deliver the same logical event more than once if a previous attempt failed in a way that still ran your code.
  • Auto-disable. After 50 consecutive failed deliveries, we automatically disable the subscription and email your organization owner. Re-enabling resets the failure counter so deliveries resume.

Headers we send

Header                 Value
Content-Type           application/json
User-Agent             PigeonPerch-Webhooks/1.0
X-Pigeon-Signature     t=<unix-seconds>,v1=<hex-hmac>
X-Pigeon-Event-Id      The batch id. Identical across retries; use for dedup.
X-Pigeon-Event-Type    Always batch in v1.

Security best practices

  • Always verify signatures. Reject any request with a bad or missing X-Pigeon-Signature. An attacker who guesses your endpoint URL will trivially flood it with fake payloads otherwise.
  • Reject stale timestamps. The 5-minute tolerance is a sane default; tighten it if you can guarantee close clock sync.
  • Log the raw body. Signature verification has to run against the exact bytes we sent. If your framework parses JSON before you see the body, capture and log the raw stream first.
  • Rotate when needed. If a secret is exposed (Git commit, screenshot, leaked log), delete the webhook and create a new one immediately. The old secret stops being valid the moment the row is deleted.

Pairing with the bulk import endpoint

Webhooks are most useful in combination with POST /api/v1/contacts/import. Together they form a two-way sync without per-event instrumentation in your application:

  1. A nightly job in your application calls /contacts/import with your full contact list, mirroring fresh data into Pigeon Perch.
  2. An HTTPS endpoint in your application receives our contact.unsubscribed webhooks and updates your local database in real time.

That's it — one cron job and one endpoint, complete sync in both directions.
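The nightly half of that loop is a single POST. A hypothetical Python sketch: the host is a placeholder, and the {"contacts": [...]} body shape is an assumption, so check the import endpoint's own reference for the real request format:

```python
import json
import urllib.request

API_BASE = "https://api.pigeonperch.example"  # placeholder host, not the real API domain

def build_import_request(api_key: str, contacts: list) -> urllib.request.Request:
    """Build the nightly bulk-import POST for /api/v1/contacts/import.

    The payload field name "contacts" is illustrative; confirm it against
    the bulk import endpoint's documentation.
    """
    return urllib.request.Request(
        f"{API_BASE}/api/v1/contacts/import",
        data=json.dumps({"contacts": contacts}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Run it from cron (e.g. `urllib.request.urlopen(build_import_request(key, contacts))` once a night), and the webhook endpoint handles the other direction.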