Status card
Provider status guide
Is OpenAI down?
Check the official OpenAI status page and rule out common API errors in your key, quota, model name, network, or code before blaming the provider.
Common errors
What usually breaks
Invalid or missing API key
Authentication failed before the request reached the model.
Permission denied
The key is recognized but not allowed to use this model, endpoint, region, or project.
Model or endpoint not found
The endpoint path or model identifier does not exist for this provider.
Rate limit or quota exceeded
Your request rate, token throughput, or quota exceeded the provider's limit.
Provider server error
The request reached the provider, which then failed internally.
Timeout
No response was available before your client timeout expired.
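These failure modes map loosely onto HTTP status codes. A minimal triage sketch in Python; the mapping is a rule of thumb for first-pass debugging, not an official OpenAI error taxonomy:

```python
def classify_error(status=None, timed_out=False):
    """Return a rough diagnosis for a failed LLM API call."""
    if timed_out:
        return "timeout"                      # client gave up waiting
    if status == 401:
        return "invalid or missing API key"   # auth failed before the model
    if status == 403:
        return "permission denied"            # key lacks model/project/region access
    if status == 404:
        return "model or endpoint not found"  # bad path or model identifier
    if status == 429:
        return "rate limit or quota exceeded"
    if status is not None and status >= 500:
        return "provider server error"        # request arrived, provider failed internally
    return "unclassified"
```

Only the 429 and 5xx cases are worth retrying; the others will fail the same way on every attempt.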
Troubleshooting checklist
Before calling it an OpenAI outage
- Check the official provider status page.
- Confirm your API key is valid and belongs to the right project.
- Confirm your account has credits, quota, and model access.
- Confirm the model name and endpoint path are correct.
- Retry with exponential backoff instead of immediate loops.
- Test with cURL outside your app.
- Try another model or provider if production is impacted.
- Enable fallback routing before the next incident.
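The "exponential backoff instead of immediate loops" step can be sketched as follows. `send_request` is a hypothetical placeholder returning `(status, body)`, and the delay constants are illustrative:

```python
import random
import time

def retry_with_backoff(send_request, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry 429s and 5xx errors with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        status, body = send_request()
        if status < 400:
            return body
        if status != 429 and status < 500:
            # 401/403/404 etc. will fail identically on every retry
            raise RuntimeError(f"non-retryable error: {status}")
        delay = min(max_delay, base_delay * 2 ** attempt)
        time.sleep(random.uniform(0, delay))  # full jitter avoids thundering herds
    raise RuntimeError("retries exhausted")
```

Jitter matters: if many clients retry on the same fixed schedule after an incident, their synchronized retries can keep the provider (or your quota) saturated.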
Decision tree
Official status is green, but my OpenAI API calls still fail
- Run a minimal cURL command with the same key and model outside your app.
- If cURL fails with 401 or 403, check key scope, project, account access, and model permissions.
- If cURL returns 429, check quota, billing, token-per-minute limits, and concurrency.
- If only browser requests fail, treat it as CORS or preflight until cURL proves otherwise.
- If one model fails but another works, check model availability or fallback routing.
- If production fails but local works, compare environment variables, outbound network rules, and region.
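A Python equivalent of that minimal cURL test can be built without sending it, which is handy for comparing the exact URL, headers, and body against what your app emits. The model name below is illustrative; substitute the model and key from your failing call:

```python
import json
import os
import urllib.request

def build_minimal_request(api_key, model="gpt-4o-mini"):
    """Construct a minimal chat-completions request for side-by-side comparison."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (network required):
# urllib.request.urlopen(build_minimal_request(os.environ["OPENAI_API_KEY"]))
```

If this bare request succeeds while your app fails, the problem is in your app's environment, headers, or routing, not the provider.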
FAQ
OpenAI API status questions
Is OpenAI down right now?
LLMPing links to the official OpenAI status source and shows troubleshooting guidance. If live fetch is unavailable, use the official status page for the latest provider-owned incident data.
How do I check OpenAI API status?
Start with the official status page, then run a minimal cURL request outside your app to separate provider issues from key, quota, model, network, or code issues.
Why is my OpenAI API request failing?
Common causes include invalid API keys, missing model access, rate limits, quota, wrong endpoint path, CORS, provider incidents, or upstream model capacity.
What should I do if OpenAI returns 429?
Check billing and quota, reduce concurrency, add exponential backoff with jitter, lower token usage, and route urgent production traffic to a fallback model or provider.
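For 429s specifically, the wait time can be chosen like this: honor a `Retry-After` header when the provider sends one (not every response includes it), otherwise fall back to jittered exponential backoff. Names and constants here are illustrative:

```python
import random

def wait_seconds(attempt, retry_after=None, base=1.0, cap=60.0):
    """Pick how long to sleep before retrying a rate-limited request."""
    if retry_after is not None:
        try:
            return float(retry_after)   # provider-specified delay wins
        except ValueError:
            pass                        # e.g. an HTTP-date form we don't parse here
    return random.uniform(0, min(cap, base * 2 ** attempt))
```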
How can I monitor OpenAI API outages?
The current MVP provides manual debugging tools. Join the early access list if you want server-side checks, Slack or email alerts, and historical latency reports.
Want alerts before users notice?
Join the early access list for server-side LLM API monitoring.
Phase 2 may add scheduled checks, email alerts, Slack alerts, and historical latency reports.