Together AI
Operational
Model provider · Open-model inference + fine-tuning at scale — Llama, Mixtral, FLUX.
API endpoint reachable · 7h ago
Response time · 24h
Measured by Prismix probes — not the vendor's status feed.
930ms
p50
- min · 780ms
- p50 · 930ms
- p95 · 1258ms
- p99 · 1540ms
Embed this live badge
Updates ~30s · append ?theme=dark for dark READMEs
Light
Dark
Copy snippet ↓
Markdown · light
[![Together AI status](https://prismix.dev/api/badge/together.svg)](https://prismix.dev/service/together)

Markdown · dark

[![Together AI status](https://prismix.dev/api/badge/together.svg?theme=dark)](https://prismix.dev/service/together)

HTML

<a href="https://prismix.dev/service/together"><img src="https://prismix.dev/api/badge/together.svg" alt="Together AI status"></a>

No public status API
Together AI doesn't publish a machine-readable status feed. We track it by probing its main endpoint every 5 minutes — reachable = operational, unreachable = major outage. An incident timeline and per-component breakdown aren't available for this provider until it (or a community status mirror) publishes a feed.
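The reachability check described above amounts to a timed HTTP request. A minimal sketch — the probe URL and timeout are assumptions, since Prismix's actual probe target isn't published:

```python
import time
import urllib.error
import urllib.request

# Hypothetical probe target; the real endpoint Prismix checks isn't documented.
ENDPOINT = "https://api.together.xyz"
TIMEOUT_S = 10

def probe(url=ENDPOINT):
    """One reachability probe: returns (reachable, latency_ms)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S):
            reachable = True
    except urllib.error.HTTPError:
        reachable = True   # the server answered, even if with an error status
    except OSError:
        reachable = False  # timeout, DNS failure, connection refused, ...
    latency_ms = (time.monotonic() - start) * 1000
    return reachable, latency_ms
```

Treating any HTTP response (even 4xx/5xx) as "reachable" matches the binary reachable/unreachable model described above; a stricter probe might map 5xx responses to a degraded state instead.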
Common questions about Together AI
Is Together AI down right now?
Together AI is currently operational according to our probes. We last checked under 5 minutes ago.
Where does this status data come from?
We probe Together AI's public endpoint every 5 minutes and record reachability + latency. No login or API key required.
Can I get email or webhook alerts when Together AI breaks?
Yes — sign in, star Together AI on the status dashboard, then add an email or Discord/Slack webhook on /alerts. Free tier gets 1 destination per channel; Pro gets 5.
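Webhook delivery boils down to an HTTP POST with a JSON body. A sketch of the Discord case — the message shape uses Discord's standard `content` field, but the exact text Prismix sends is an assumption:

```python
import json
import urllib.request

def build_alert(service: str, state: str) -> dict:
    """Message body in the shape a Discord webhook accepts: {"content": ...}.
    The wording is illustrative, not Prismix's actual alert text."""
    return {"content": f"{service} changed state: now {state}"}

def send_alert(webhook_url: str, payload: dict) -> int:
    """POST the JSON payload to the webhook; returns the HTTP status code."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Usage (not run here):
# send_alert("https://discord.com/api/webhooks/<id>/<token>",
#            build_alert("Together AI", "major outage"))
```

Slack's incoming webhooks accept a near-identical payload with a `text` field instead of `content`, which is why dashboards can offer both behind one "webhook" destination type.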
What does "Operational" mean?
All systems operating normally. No active incidents reported.
Related model providers
Get notified when Together AI changes state
One-click email alerts for Together AI only. No account. No Pro tier. Unsubscribe in every email.
Want to subscribe to multiple services + control severity threshold + add quiet hours? Create a free account instead.
Last refreshed 46m ago · cached · data from https://status.together.ai