# OpenAI-Compatible LLM API Overview
WaveSpeedAI LLM is an OpenAI-compatible LLM API for Claude, GPT, DeepSeek, Qwen, ByteDance Seed, and other large language models. You can test models in the web playground, compare pricing, then connect selected models to your app or coding agent.
## What You Can Do
| Goal | Where to go |
|---|---|
| Try a model in the browser | LLM Playground |
| Make your first API request | Quick Start |
| See request fields and examples | Quick Start |
| Compare model families and pricing | Supported LLM Models |
| Connect coding tools | Connect Coding Agents |
## Why Use WaveSpeedAI LLM
WaveSpeedAI provides one access layer for models from multiple providers, including Anthropic, OpenAI, DeepSeek, Qwen, ByteDance, and others. Instead of managing separate keys and billing accounts for every LLM provider, you can use one WaveSpeedAI API key and switch models by changing the model value.
## How It Works
```text
Your app or agent
  -> https://llm.wavespeed.ai/v1
  -> WaveSpeedAI routing and billing
  -> Selected model provider
```

The API uses the OpenAI Chat Completions format, so many OpenAI-compatible SDKs, tools, and agent frameworks can connect by changing only the base URL and API key.
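As a minimal sketch of that flow, the snippet below builds and sends a Chat Completions request using only Python's standard library. The `/chat/completions` path and payload shape follow the standard OpenAI format, which this API is documented to accept; the `build_chat_request` and `chat_completion` helper names are ours for illustration, not part of any SDK, and the model ID echoes the example used elsewhere on this page.

```python
import json
import urllib.request

BASE_URL = "https://llm.wavespeed.ai/v1"

def build_chat_request(model, messages, stream=False):
    """Build an OpenAI-style Chat Completions payload."""
    return {"model": model, "messages": messages, "stream": stream}

def chat_completion(api_key, payload):
    """POST the payload to the Chat Completions endpoint and decode the reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The request body is identical regardless of which provider serves the model.
payload = build_chat_request(
    "anthropic/claude-opus-4.7",
    [{"role": "user", "content": "Say hello."}],
)
```

With a valid API key, `chat_completion(api_key, payload)` returns the usual OpenAI-style response object, with the assistant text under `choices[0].message.content`.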
## OpenAI-Compatible LLM API Basics
| Concept | Meaning |
|---|---|
| Base URL | https://llm.wavespeed.ai/v1 |
| API key | Your WaveSpeedAI API key |
| Model ID | Provider-prefixed name such as anthropic/claude-opus-4.7 |
| Messages | Chat history sent as system, user, and assistant messages |
| Streaming | Optional token-by-token response mode |
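In the OpenAI streaming format, a response arrives as server-sent events: each `data:` line carries a JSON chunk whose `choices[0].delta.content` holds the next text fragment, and `data: [DONE]` marks the end. Assuming WaveSpeedAI follows this standard shape (the helper and the sample lines below are illustrative, not captured output):

```python
import json

def collect_stream_text(sse_lines):
    """Reassemble assistant text from OpenAI-style SSE stream lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break  # sentinel that ends the stream
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")  # first chunk may carry only the role
    return "".join(parts)

sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # prints "Hello"
```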
## WaveSpeedAI vs First-Party Providers
Use first-party providers when you need native provider features, provider-specific APIs, or first-party account controls. Consider WaveSpeedAI LLM when you want an OpenAI-compatible API layer for multiple model providers, one API key, and unified billing.
| Topic | First-party provider | WaveSpeedAI LLM |
|---|---|---|
| API format | Provider-specific in many cases | OpenAI-compatible Chat Completions |
| Model switching | Usually within one provider family | Change the provider-prefixed model value |
| Billing | Separate provider billing | Unified WaveSpeedAI billing |
| Coding tools | Usually strongest for first-party tools | Connect through OpenAI-compatible clients, custom providers, or compatible gateways |
The protocol remains OpenAI-compatible even when the selected model is Claude, DeepSeek, Qwen, or another non-OpenAI model.
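Because the request shape never changes, switching providers reduces to swapping the `model` string. A minimal sketch (the second model ID is illustrative; check Supported LLM Models for actual IDs):

```python
def with_model(payload, model):
    """Return a copy of a Chat Completions payload targeting another model."""
    return {**payload, "model": model}

base = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Summarize this diff."}],
}

# Only the provider-prefixed model value changes; the request shape does not.
switched = with_model(base, "deepseek/deepseek-chat")
```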
## Recommended Reading Path
- Start with Quick Start if you want to make a request right away.
- Use Supported LLM Models to choose candidate models.
- Use Connect Coding Agents if you want Claude Code, Codex, OpenClaw, or another developer agent to use WaveSpeedAI models.