live · status.pulsehq.tech

All systems operational.

No incidents in the last 24 hours. Last incident: 8 days ago (degraded retrieval, 18 min, post-mortem published).

Last refresh · 14s ago
30-day uptime · 99.984%
90-day uptime · 99.971%

Component health

Live status of every public-facing system. Each bar covers one minute of the last 60; bar height shows p50 latency.

Pulse Web App

app.pulsehq.tech

Operational · 100.00%
p50 · 142ms

Public API

api.pulsehq.tech/v1

Operational · 99.99%
p50 · 88ms

MCP Server

mcp.pulsehq.tech

Operational · 100.00%
p50 · 64ms

Retrieval & map graph

internal · query path

Operational · 99.94%
p50 · 211ms

Briefing delivery

scheduled jobs · 06:00–10:00 local

Operational · 99.99%
queue · 2.1s

Inference · Anthropic route

via zero-retention contract

Operational · 99.97%
p50 · 1.4s

Inference · OpenAI route

via zero-retention contract

Degraded · 99.71%
p50 · 2.3s ↑

Connectors · Google Workspace

OAuth + sync

Operational · 100.00%
sync · 4m lag

Connectors · Slack

events + sync

Operational · 99.99%
events · 1.1s

Connectors · Salesforce

bulk + streaming

Operational · 99.96%
sync · 6m lag

Audit log stream

SIEM webhook · per tenant

Operational · 100.00%
delivery · 320ms

Auth · SSO & SCIM

SAML · OIDC · SCIM 2.0

Operational · 100.00%
p50 · 56ms

Stripe billing webhook

subscription events

Maintenance · 03:00–04:00 UTC · 99.98%
paused · scheduled

90-day uptime

One cell per day, oldest on the left. Hover for the per-day percentage.

App + API · 99.971%
MCP server · 99.984%
Retrieval · 99.940%
Inference (combined) · 99.892%
90 days ago → today
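The percentages above translate directly into a downtime budget. A quick sketch of that arithmetic, using the figures from the 90-day table (the helper function is mine, not part of Pulse's tooling):

```python
def downtime_budget(uptime_pct: float, days: int) -> float:
    """Minutes of allowed downtime for a given uptime percentage over `days`."""
    return days * 24 * 60 * (100 - uptime_pct) / 100

# Figures from the 90-day table above:
print(round(downtime_budget(99.971, 90), 1))  # App + API: ~37.6 minutes over 90 days
print(round(downtime_budget(99.940, 90), 1))  # Retrieval: ~77.8 minutes over 90 days
```

At these levels, a single 18-minute incident (like the one below) consumes roughly half the App + API budget for the quarter.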

Recent incidents

Degraded retrieval latency on us-east cluster

Apr 23 · 14:08–14:26 UTC · 18 min
Resolved · degraded
14:08

Detected · p99 retrieval latency rose above the 800ms threshold; pager fired.

14:11

Investigating · engineer on-call confirmed elevated tail latency; isolated to one shard.

14:17

Identified · a noisy-neighbour query was rebuilding an index; rate limit applied.

14:23

Mitigated · p99 returned below threshold; no failed requests, only slow ones.

14:26

Monitoring · system stable; declared resolved.

Customers affected · approx. 9% of us-east tenants
SLA breach · none
Post-mortem · published Apr 25

OpenAI inference route, sustained 5xx

Apr 11 · 19:42–20:14 UTC · 32 min
Resolved · partial outage
19:42

Detected · upstream provider returning 503s; circuit breaker tripped.

19:44

Mitigating · auto-failover rerouted briefing and agent runs to Anthropic where customer policy permits.

19:58

Customers notified · 27 tenants pinned to OpenAI received a banner; agent runs queued.

20:14

Resolved · upstream recovered; queued runs drained over the next 11 minutes.

Customers affected · 27 (pinned-OpenAI)
SLA breach · 4 customers, credit issued
Post-mortem · published Apr 13
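The failover path in this timeline — trip a circuit breaker on sustained upstream 5xx, reroute to the other provider where tenant policy allows it, otherwise queue — can be sketched as follows. This is an illustrative model only; the class names, threshold, and route labels are my assumptions, not Pulse's actual routing code:

```python
class CircuitBreaker:
    """Trips after `threshold` consecutive 5xx responses from an upstream route."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False  # open = stop sending traffic to this route

    def record(self, status: int) -> None:
        if status >= 500:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # sustained 5xx: trip the breaker
        else:
            self.failures = 0  # any success resets the streak


def pick_route(breakers: dict, preferred: str, fallback: str,
               policy_allows_fallback: bool) -> str:
    """Choose a healthy inference route, or queue the run if none is allowed."""
    if not breakers[preferred].open:
        return preferred
    if policy_allows_fallback and not breakers[fallback].open:
        return fallback  # auto-failover, e.g. OpenAI -> Anthropic
    return "queued"      # pinned tenants wait for the upstream to recover


breakers = {"openai": CircuitBreaker(), "anthropic": CircuitBreaker()}
for _ in range(3):
    breakers["openai"].record(503)  # sustained upstream 503s

print(pick_route(breakers, "openai", "anthropic", policy_allows_fallback=True))   # anthropic
print(pick_route(breakers, "openai", "anthropic", policy_allows_fallback=False))  # queued
```

Queuing rather than failing for pinned tenants matches the timeline above: runs were delayed, then drained once the upstream recovered.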

Scheduled · billing webhook maintenance

May 04 · 03:00–04:00 UTC · planned
Scheduled
Apr 27

Announced · Stripe webhook handler upgrade. Subscription events queued; no customer-visible impact.

May 04

Window · 03:00–04:00 UTC. Sign-ups proceed; receipts may be delayed up to 60 min.

Customers affected · none (queued)
Notice · 7 days

Get notified through the channels you actually use.

Subscribe to incident updates over email, RSS, Slack, or webhook. We send three messages per incident: detected, mitigated, resolved. No marketing in the same channel.
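For webhook subscribers, the three-message lifecycle above suggests a simple consumer. A minimal sketch: the payload shape (`event`, `component`, `incident` keys) is an assumption for illustration — only the three event names come from this page:

```python
# Hypothetical handler for Pulse incident webhooks. The dict keys below are
# assumed, not a documented schema; only the lifecycle names are from the page.
LIFECYCLE = {"detected", "mitigated", "resolved"}


def handle_incident_event(payload: dict) -> str:
    """Validate the event type and format a one-line notification."""
    event = payload.get("event")
    if event not in LIFECYCLE:
        raise ValueError(f"unexpected event type: {event!r}")
    component = payload.get("component", "unknown")
    summary = payload.get("incident", "")
    return f"[{event}] {component}: {summary}"


print(handle_incident_event({
    "event": "detected",
    "component": "Inference · OpenAI route",
    "incident": "upstream provider returning 503s",
}))
```

Rejecting unknown event types keeps the channel clean: with exactly three lifecycle messages and no marketing traffic, anything else is a bug worth surfacing.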