OpenAI-Compatible LLM API Overview

WaveSpeedAI LLM is an OpenAI-compatible LLM API for Claude, GPT, DeepSeek, Qwen, ByteDance Seed, and other large language models. You can test models in the web playground, compare pricing, then connect selected models to your app or coding agent.

What You Can Do

| Goal | Where to go |
| --- | --- |
| Try a model in the browser | LLM Playground |
| Make your first API request | Quick Start |
| See request fields and examples | Quick Start |
| Compare model families and pricing | Supported LLM Models |
| Connect coding tools | Connect Coding Agents |

Why Use WaveSpeedAI LLM

WaveSpeedAI provides one access layer for models from multiple providers, including Anthropic, OpenAI, DeepSeek, Qwen, ByteDance, and others. Instead of managing separate keys and billing accounts for every LLM provider, you can use one WaveSpeedAI API key and switch models by changing the model value.
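As a sketch of what "switch models by changing the model value" means in practice: the request body stays the same across providers, and only the provider-prefixed `model` string changes. The model IDs below are illustrative; check Supported LLM Models for current ones.

```python
import json

def build_payload(model: str, prompt: str) -> dict:
    """Build a Chat Completions request body for any provider-prefixed model."""
    return {
        "model": model,  # the only field that changes between providers
        "messages": [{"role": "user", "content": prompt}],
    }

# Illustrative model IDs from different providers, same payload shape
for model in ["anthropic/claude-opus-4.7", "deepseek/deepseek-chat"]:
    print(json.dumps(build_payload(model, "Hello")))
```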

How It Works

Your app or agent
  -> https://llm.wavespeed.ai/v1
  -> WaveSpeedAI routing and billing
  -> Selected model provider

The API uses the OpenAI Chat Completions format. That means many OpenAI-compatible SDKs, tools, and agent frameworks can connect by changing the base URL and API key.
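For illustration, here is a minimal sketch of such a request using only Python's standard library. The `/chat/completions` path follows the OpenAI Chat Completions convention, and the API key is a placeholder; the request is constructed but not sent.

```python
import json
import urllib.request

BASE_URL = "https://llm.wavespeed.ai/v1"
API_KEY = "YOUR_WAVESPEED_API_KEY"  # placeholder; use your real key

payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the shape matches the Chat Completions format, the same request works unchanged through any OpenAI-compatible SDK pointed at the base URL above.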

OpenAI-Compatible LLM API Basics

| Concept | Meaning |
| --- | --- |
| Base URL | https://llm.wavespeed.ai/v1 |
| API key | Your WaveSpeedAI API key |
| Model ID | Provider-prefixed name such as anthropic/claude-opus-4.7 |
| Messages | Chat history sent as system, user, and assistant messages |
| Streaming | Optional token-by-token response mode |
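To make the streaming concept concrete, the sketch below parses a canned sample of the server-sent-events format that OpenAI-compatible streaming responses use. The chunk shape (`choices[0].delta.content`, terminated by `data: [DONE]`) follows the Chat Completions streaming convention; the sample lines are invented for illustration.

```python
import json

# A canned sample of what a streamed response body looks like, line by line
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]

def collect_text(lines):
    """Accumulate the assistant text from a stream of SSE data lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        body = line[len("data: "):]
        if body == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(body)
        text.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(text)

print(collect_text(sample_stream))  # -> Hello
```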

WaveSpeedAI vs First-Party Providers

Use first-party providers when you need native provider features, provider-specific APIs, or first-party account controls. Consider WaveSpeedAI LLM when you want an OpenAI-compatible API layer for multiple model providers, one API key, and unified billing.

| Topic | First-party provider | WaveSpeedAI LLM |
| --- | --- | --- |
| API format | Provider-specific in many cases | OpenAI-compatible Chat Completions |
| Model switching | Usually within one provider family | Change the provider-prefixed model value |
| Billing | Separate provider billing | Unified WaveSpeedAI billing |
| Coding tools | Usually strongest for first-party tools | Connect through OpenAI-compatible clients, custom providers, or compatible gateways |

The protocol remains OpenAI-compatible even when the selected model is Claude, DeepSeek, Qwen, or another non-OpenAI model.

Next Steps


  1. Start with Quick Start if you want to make a request now.
  2. Use Supported LLM Models to choose candidate models.
  3. Use Connect Coding Agents if you want Claude Code, Codex, OpenClaw, or another developer agent to use WaveSpeedAI models.
© 2025 WaveSpeedAI. All rights reserved.