Connect Coding Agents to WaveSpeedAI LLM
Use this guide when you want a coding assistant or developer agent to route its requests through WaveSpeedAI LLM via an OpenAI-compatible endpoint. It covers Claude Code, OpenAI Codex, OpenClaw, and other custom model clients.
Recommended Mental Model
Most coding tools are built around one provider. WaveSpeedAI LLM is usually easiest to test when the tool supports an OpenAI-compatible custom provider:
Coding tool
-> Custom OpenAI-compatible provider
-> https://llm.wavespeed.ai/v1
-> Selected WaveSpeedAI model
Claude Code, OpenAI Codex, OpenClaw, or Custom OpenAI?
| Tool | Recommended path |
|---|---|
| Claude Code | Use an Anthropic-compatible gateway that routes to WaveSpeedAI |
| OpenAI Codex | Use a custom OpenAI-compatible base URL if your setup supports it |
| OpenClaw | Add WaveSpeedAI as a custom provider |
| Other tools | Choose OpenAI-compatible or Custom OpenAI mode |
Universal OpenAI-Compatible Settings
| Setting | Value |
|---|---|
| Base URL | https://llm.wavespeed.ai/v1 |
| API key | Your WaveSpeedAI API key |
| Protocol | OpenAI Chat Completions |
| Model ID | Provider-prefixed ID, such as anthropic/claude-opus-4.7 |
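The settings in the table map directly onto a plain HTTP request. A minimal sketch using only the Python standard library, with placeholder key and prompt values (the model ID is one used elsewhere on this page):

```python
import json
import urllib.request

BASE_URL = "https://llm.wavespeed.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style Chat Completions request for the WaveSpeedAI endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending it is one more line once the request is built:
# with urllib.request.urlopen(build_chat_request(key, model, "hi")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible SDK can replace this by pointing its base URL at the same endpoint.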
Verify the Backend First
Before configuring an agent, confirm your key and model work with a small API call.
curl https://llm.wavespeed.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_WAVESPEED_API_KEY" \
-d '{
"model": "anthropic/claude-opus-4.7",
"messages": [
{
"role": "user",
"content": "Reply with one sentence confirming you are ready for coding tasks."
}
]
}'
Verified WaveSpeedAI Endpoint Checks
The examples on this page use WaveSpeedAI LLM endpoint values verified with https://llm.wavespeed.ai/v1/chat/completions.
| Check | Result |
|---|---|
| anthropic/claude-opus-4.7 chat completion | Verified |
| openai/gpt-5.5 chat completion | Verified |
| openai/gpt-5.5 streaming response | Verified |
| qwen/qwen3-coder chat completion | Verified |
| deepseek/deepseek-chat chat completion | Verified |
| bytedance-seed/seed-1.6-flash chat completion | Verified |
After the endpoint check passes, run a read-only prompt in your coding tool to confirm the selected provider, model, and project permissions before asking it to edit files.
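If you prefer to script the verification rather than eyeball raw JSON, the two fields worth checking are the echoed model ID and the first message content. A sketch assuming the standard OpenAI Chat Completions response shape (the sample payload below is illustrative, not a captured response):

```python
def summarize_completion(resp: dict) -> tuple[str, str]:
    """Return (model, first message content) from a Chat Completions response."""
    model = resp["model"]
    content = resp["choices"][0]["message"]["content"]
    return model, content

# Example with a trimmed-down response body:
sample = {
    "model": "anthropic/claude-opus-4.7",
    "choices": [
        {"message": {"role": "assistant", "content": "Ready for coding tasks."}}
    ],
}
model, content = summarize_completion(sample)
```

Comparing the returned model against the one you requested catches silent fallbacks early.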
Before a Long Coding Session
Run a small prompt first:
Reply with one sentence confirming which model you are.
This helps confirm the agent is using the intended provider, key, and model before it starts reading or editing a large codebase.
Model Selection Tips
| Task | Suggested setup |
|---|---|
| Large codebase changes | Use a stronger reasoning or coding model |
| Small edits | Use a faster, cheaper model |
| Explaining code | Use a balanced chat model |
| Long-context repo analysis | Prioritize context window |
| Cost control | Use cheaper models for exploration, stronger models for final edits |
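The table above can be turned into a small routing helper. The model IDs below are the ones used elsewhere on this page, but the task-to-model mapping itself is an assumption you should tune for your own workloads and budget:

```python
# Illustrative task-to-model routing based on the selection tips above.
# The mapping is an example, not a recommendation for every project.
ROUTES = {
    "large_change": "anthropic/claude-opus-4.7",    # stronger reasoning/coding model
    "small_edit": "bytedance-seed/seed-1.6-flash",  # faster, cheaper model
    "explain": "deepseek/deepseek-chat",            # balanced chat model
    "explore": "qwen/qwen3-coder",                  # cheap exploration passes
}

def pick_model(task: str) -> str:
    """Fall back to the strongest model when the task type is unknown."""
    return ROUTES.get(task, "anthropic/claude-opus-4.7")
```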
Claude Code
Claude Code uses Anthropic-style client settings. When you want to power Claude Code workflows with WaveSpeedAI LLM, use an Anthropic-compatible gateway that routes requests to the WaveSpeedAI OpenAI-compatible endpoint.
Claude Code
-> Anthropic-compatible gateway
-> https://llm.wavespeed.ai/v1
Set the WaveSpeedAI values in the shell that launches Claude Code:
export ANTHROPIC_BASE_URL="https://llm.wavespeed.ai/v1"
export ANTHROPIC_AUTH_TOKEN="YOUR_WAVESPEED_API_KEY"
export ANTHROPIC_MODEL="anthropic/claude-opus-4.7"
claude
Windows PowerShell:
$env:ANTHROPIC_BASE_URL = "https://llm.wavespeed.ai/v1"
$env:ANTHROPIC_AUTH_TOKEN = "YOUR_WAVESPEED_API_KEY"
$env:ANTHROPIC_MODEL = "anthropic/claude-opus-4.7"
claude
If your Claude Code setup loads environment variables from ~/.claude/settings.json, add the WaveSpeedAI values under env.
{
"env": {
"DISABLE_AUTOUPDATER": "1",
"ANTHROPIC_BASE_URL": "https://llm.wavespeed.ai/v1",
"ANTHROPIC_AUTH_TOKEN": "YOUR_WAVESPEED_API_KEY",
"ANTHROPIC_MODEL": "anthropic/claude-opus-4.7"
}
}
Set ANTHROPIC_MODEL to the model name accepted by your Claude Code runtime. Use the WaveSpeedAI model ID when your setup passes model names through, or use the local alias that maps to a WaveSpeedAI model upstream.
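A common failure mode is launching Claude Code from a shell where one of the three variables is unset. A quick stdlib-only preflight check (the variable names match the exports above; the helper name is illustrative):

```python
import os

REQUIRED = ("ANTHROPIC_BASE_URL", "ANTHROPIC_AUTH_TOKEN", "ANTHROPIC_MODEL")

def missing_claude_env(env=None) -> list[str]:
    """Return the names of any required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_claude_env()
    if missing:
        print("Set these before launching Claude Code:", ", ".join(missing))
```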
OpenAI Codex
For Codex clients that support custom OpenAI-compatible providers, configure WaveSpeedAI as the provider and use a tested WaveSpeedAI model ID.
model = "openai/gpt-5.5"
model_provider = "wavespeed"
[model_providers.wavespeed]
name = "WaveSpeedAI LLM"
base_url = "https://llm.wavespeed.ai/v1"
env_key = "WAVESPEED_API_KEY"
wire_api = "chat"
Then launch Codex with your WaveSpeedAI key in the environment:
export WAVESPEED_API_KEY="YOUR_WAVESPEED_API_KEY"
codex
Windows PowerShell:
$env:WAVESPEED_API_KEY = "YOUR_WAVESPEED_API_KEY"
codex
The OpenAI-compatible endpoint path for Codex-style clients was verified with openai/gpt-5.5, including a streaming request. If your Codex version uses different config field names or a different wire protocol, keep the same WaveSpeedAI base URL, API key, and model ID while adapting the surrounding config to that client version.
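Streaming responses from the chat completions endpoint arrive as server-sent events: one `data:` line per chunk, with the incremental text in `choices[0].delta.content` and a final `data: [DONE]` marker. A sketch of reassembling already-received lines (the sample chunks in use here are illustrative, not captured output):

```python
import json

def stream_text(sse_lines: list[str]) -> str:
    """Concatenate content deltas from OpenAI-style streaming chunks."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        delta = json.loads(data)["choices"][0].get("delta", {})
        content = delta.get("content")
        if content:
            parts.append(content)
    return "".join(parts)
```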
After configuring Codex, start with a read-only prompt such as:
Summarize this repository in three bullets. Do not edit files.
OpenClaw
For OpenClaw setups with custom OpenAI-compatible providers, add WaveSpeedAI as a provider and test a small prompt first.
{
agents: {
defaults: {
model: { primary: "wavespeed/anthropic/claude-opus-4.7" },
},
},
models: {
mode: "merge",
providers: {
wavespeed: {
baseUrl: "https://llm.wavespeed.ai/v1",
apiKey: "${WAVESPEED_API_KEY}",
api: "openai-completions",
models: [
{
id: "anthropic/claude-opus-4.7",
name: "Claude Opus 4.7 via WaveSpeedAI",
input: ["text"],
},
{
id: "openai/gpt-5.5",
name: "GPT-5.5 via WaveSpeedAI",
input: ["text"],
}
],
},
},
},
}
If your OpenClaw version uses a different provider config shape, keep the same base URL, API key, and model IDs while adapting the surrounding fields to that version.
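The `${WAVESPEED_API_KEY}` placeholder in the config above is resolved from the environment. If your tooling does not expand it for you, the substitution is straightforward to do yourself; a sketch using the standard library, assuming shell-style `${VAR}` placeholders:

```python
import os
from string import Template

def expand_env(value: str, env=None) -> str:
    """Replace ${VAR} placeholders in a config value with environment values."""
    env = dict(os.environ) if env is None else env
    return Template(value).substitute(env)
```

`Template.substitute` raises `KeyError` for an unset variable, which surfaces a missing API key loudly instead of sending an empty token.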
After selecting the provider in OpenClaw, start with a read-only prompt before asking it to modify a project.
Troubleshooting
| Problem | Likely cause or fix |
|---|---|
| 401 error | Wrong API key or missing bearer token |
| Model not found | Model ID is incomplete or not available |
| Tool still calls OpenAI | Base URL was not changed |
| Claude Code request is not routed | Confirm the WaveSpeedAI base URL, API key, and upstream model mapping |
| Claude Code model is not recognized | Set ANTHROPIC_MODEL to the model name accepted by your Claude Code runtime |
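When scripting the endpoint check, the first two rows of the table map onto HTTP status codes. The mapping below is illustrative; exact statuses vary by failure mode, and real error bodies usually carry a more specific message worth printing:

```python
# Illustrative status-to-cause mapping mirroring the troubleshooting table.
LIKELY_CAUSE = {
    401: "Wrong API key or missing bearer token",
    404: "Model ID is incomplete or not available",
}

def diagnose(status: int) -> str:
    """Suggest a likely cause for a failed endpoint check."""
    return LIKELY_CAUSE.get(status, f"Unexpected status {status}; check the response body")
```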