WaveSpeed CLI

Use WaveSpeed CLI to run WaveSpeedAI models from your terminal. The CLI gives image, video, audio, 3D, and other model workflows a single entry point:

wavespeed run <model_id> --json

It is useful when you want to generate assets manually, automate model calls from scripts or CI jobs, or let coding agents such as Codex, Claude Code, Cursor, and similar tools call WaveSpeedAI from the terminal.

Visit the WaveSpeed CLI page for the product overview and installation instructions.

Install

WaveSpeed CLI requires Node.js 18 or later.

npm install -g @wavespeed/cli
wavespeed --version

Sign In

Use interactive login when you are working locally:

wavespeed login
wavespeed status

Use non-interactive login for remote machines or automation:

wavespeed login --api-key <API_KEY> --no-browser
wavespeed status

To remove the stored API key from the machine:

wavespeed logout

Keep API keys out of git, README files, frontend code, and shared logs. The CLI stores the key in local user configuration after login.

Most CLI tasks follow this order:

models -> schema -> price -> run -> download or history

For example:

# 1. Search for a model
wavespeed models z-image
 
# 2. Inspect the model inputs
wavespeed schema wavespeed-ai/z-image/turbo
 
# 3. Estimate the price before generating
wavespeed price wavespeed-ai/z-image/turbo -p "a product photo"
 
# 4. Run the model and print JSON
wavespeed run wavespeed-ai/z-image/turbo -p "a product photo" --json
 
# 5. Run and download outputs
wavespeed run wavespeed-ai/z-image/turbo -p "a product photo" --json --download

Search Models

Use models to browse the WaveSpeedAI model catalog.

wavespeed models
wavespeed models z-image
wavespeed models video
wavespeed models lora
wavespeed models --popular
wavespeed models z-image --popular
wavespeed models --type text-to-image
wavespeed models --type image-to-video
wavespeed models --type text-to-video
wavespeed models --type image-to-image
wavespeed models --category text-to-image
wavespeed models --refresh
wavespeed models --no-cache
wavespeed models z-image --json

models does not use a --limit option. Use search terms, --popular, --type, or --category to narrow results. Commands such as history use --limit when you need to control the number of returned records.

Inspect Model Inputs

Use schema before running an unfamiliar model:

wavespeed schema <model_id>
wavespeed schema <model_id> --json
wavespeed schema <model_id> --refresh

You can also inspect runnable flags for a model:

wavespeed run <model_id> -h

For example:

wavespeed schema wavespeed-ai/z-image/turbo

The schema shows required and optional inputs such as prompt, size, seed, output_format, image URLs, video settings, or other model-specific fields.

Run a Model

The simplest run uses a model ID and a prompt:

wavespeed run wavespeed-ai/z-image/turbo -p "a cyberpunk skyline" --json

Pass model-specific inputs with -i:

wavespeed run wavespeed-ai/z-image/turbo \
  -p "a clean product photo of a glass teapot" \
  -i "size=1024*1024" \
  -i output_format=png \
  --json \
  --download

Common run options:

Option                              Use
-p, --prompt <text>                 Shortcut for -i prompt=<text>
-i, --input key=value               Pass one model input; repeat it for multiple inputs
--input-file <path>                 Pass a full JSON input object from a file
--json                              Emit one JSON object on stdout for scripts and agents
--download                          Download generated outputs to the default output directory
--download "./out/{index}.{ext}"    Download with a path template
--output-dir <dir>                  Set the download directory for this run
--sync                              Use synchronous mode when the model supports it

For scripts and AI agents, prefer --json so the caller can parse the result reliably. Add --download when local files are needed.
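As a sketch of that pattern, the helper below pairs each output URL from a run result with its saved local path. It assumes the --json payload exposes `outputs` and `saved` arrays, as the Script Automation examples later in this guide do; other fields may vary by model, so treat this as a starting point rather than a fixed contract.

```python
import json

def parse_run_result(stdout: str):
    """Pair each output URL with its local saved path from a --json result.

    Assumes the payload contains `outputs` and `saved` arrays, matching the
    fields used in the Script Automation examples; other fields may differ.
    """
    result = json.loads(stdout)
    outputs = result.get("outputs", [])
    saved = result.get("saved", [])
    return list(zip(outputs, saved))

# Example payload shaped like an assumed run result:
sample = '{"outputs": ["https://cdn.example/img.png"], "saved": ["out/img.png"]}'
print(parse_run_result(sample))
```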

Upload Files

Image-to-image, image editing, image-to-video, and similar models often require a hosted file URL. Upload local files first:

wavespeed upload ./input.png
wavespeed upload ./input.png --json

Upload multiple files:

wavespeed upload ./a.png ./b.png --json

Then pass the returned CDN URL to a model input:

wavespeed run <model_id> -i image="<url>" -p "animate this image" --json
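The upload-then-run chain can be scripted. The sketch below pulls a hosted URL out of an assumed `wavespeed upload --json` payload and builds the follow-up run command; the `url` field name and the payload shape are assumptions, so inspect the real --json output on your machine before relying on them.

```python
import json

def extract_upload_url(stdout: str) -> str:
    """Pull the hosted file URL out of an `upload --json` payload.

    The `url` field (and the list-vs-object shape) is an assumption; check
    the actual output of `wavespeed upload --json` first.
    """
    data = json.loads(stdout)
    if isinstance(data, list):  # multiple files uploaded
        data = data[0]
    return data["url"]

def build_image_run_argv(model_id: str, image_url: str, prompt: str):
    """Build the follow-up `wavespeed run` command shown above."""
    return ["wavespeed", "run", model_id,
            "-i", f"image={image_url}", "-p", prompt, "--json"]

sample = '{"url": "https://cdn.example/input.png"}'
url = extract_upload_url(sample)
print(build_image_run_argv("some/model", url, "animate this image"))
```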

Download Results

Use download when you already have output URLs:

wavespeed download <url>
wavespeed download <url> --json
wavespeed download <url> -o ./output.png
wavespeed download <url1> <url2> --output-dir ./outputs

Price and Balance

Check your account balance:

wavespeed balance
wavespeed balance --json
wavespeed balance --top-up

Estimate the cost of a run before generating:

wavespeed price wavespeed-ai/z-image/turbo -p "a product photo"
wavespeed price wavespeed-ai/z-image/turbo -i prompt="a product photo" --json

price calculates the estimated cost for the given inputs. It does not generate output and does not charge for a prediction.
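In automation, a price check can gate the run. The sketch below assumes `wavespeed price ... --json` reports a numeric cost field; the `total` name here is hypothetical, so adjust it to the real payload before use.

```python
import json

def within_budget(price_stdout: str, limit: float) -> bool:
    """Return True when the estimated cost fits the budget.

    The `total` field name is hypothetical; match it to the actual
    `wavespeed price --json` payload on your machine.
    """
    estimate = json.loads(price_stdout)
    return float(estimate["total"]) <= limit

# Skip the run when the estimate exceeds the budget:
print(within_budget('{"total": 0.012}', limit=0.05))
```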

Open or print the top-up page:

wavespeed top-up
wavespeed top-up --print

History

List recent predictions:

wavespeed history
wavespeed history --limit 10
wavespeed history --page 2 --limit 20
wavespeed history --model wavespeed-ai/z-image/turbo
wavespeed history --status completed
wavespeed history --json

Show one prediction:

wavespeed show <prediction_id>
wavespeed show <prediction_id> --json
wavespeed show <prediction_id> --download

Delete predictions:

wavespeed delete <prediction_id>
wavespeed delete <id1> <id2> -y
wavespeed delete <prediction_id> --json

Project Configuration

Initialize a project configuration file:

wavespeed init -y

This creates wavespeed.json:

{
  "$schema": "https://wavespeed.ai/schema/cli.json",
  "defaultModel": "google/nano-banana-2/text-to-image",
  "outputDir": "wavespeed-output",
  "aliases": {}
}

With defaultModel, you can omit the model ID:

wavespeed run -p "a product photo" --json --download

Use aliases to share repeatable model settings with a team:

{
  "defaultModel": "google/nano-banana-2/text-to-image",
  "outputDir": "wavespeed-output",
  "aliases": {
    "hero": {
      "model": "wavespeed-ai/z-image/turbo",
      "input": {
        "size": "1024*1024",
        "output_format": "png"
      }
    }
  }
}

Then run the alias:

wavespeed run hero -p "a landing page hero image" --json --download
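Conceptually, an alias bundles a model ID with default inputs, and the flags you pass at run time fill in the rest. A minimal sketch of that merge, assuming command-line inputs take precedence over alias defaults (verify the CLI's actual precedence rules):

```python
def resolve_alias(config: dict, name: str, cli_inputs: dict):
    """Merge alias defaults from wavespeed.json with command-line inputs.

    Assumes command-line values override alias defaults; verify the CLI's
    actual precedence before relying on this ordering.
    """
    alias = config["aliases"][name]
    inputs = {**alias.get("input", {}), **cli_inputs}
    return alias["model"], inputs

config = {
    "aliases": {
        "hero": {
            "model": "wavespeed-ai/z-image/turbo",
            "input": {"size": "1024*1024", "output_format": "png"},
        }
    }
}
print(resolve_alias(config, "hero", {"prompt": "a landing page hero image"}))
```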

View available aliases:

wavespeed aliases
wavespeed aliases --json

Global Configuration

Use config for machine-level defaults:

wavespeed config
wavespeed config --base-url https://api.wavespeed.ai
wavespeed config --default-model wavespeed-ai/z-image/turbo
wavespeed config --output-dir wavespeed-output
wavespeed config --reset

Use wavespeed.json for project settings that can be shared with a repository. Keep login credentials and personal defaults in local user configuration.

Use with Coding Agents

WaveSpeed CLI is agent-ready. Codex, Claude Code, Cursor, and other coding agents can discover models, inspect schemas, estimate prices, run generations, and parse JSON output directly from the terminal.

Install the agent skill:

wavespeed skill install

This creates:

.claude/skills/wavespeed/SKILL.md

After the skill is installed, you can ask a coding agent for tasks such as:

Find a text-to-image model, generate a product photo, and download it locally.

The agent should follow the same CLI flow:

wavespeed models --type text-to-image --popular
wavespeed schema <model_id>
wavespeed price <model_id> -p "..."
wavespeed run <model_id> -p "..." --json --download

This keeps the integration simple: agents call the same CLI commands a developer would run, and JSON output gives them a stable result to read.

Script Automation

Node.js

import { execFile } from "node:child_process";
import { promisify } from "node:util";
 
const execFileAsync = promisify(execFile);
 
const { stdout } = await execFileAsync("wavespeed", [
  "run",
  "wavespeed-ai/z-image/turbo",
  "-p",
  "a clean product photo",
  "--json",
  "--download"
]);
 
const result = JSON.parse(stdout);
console.log(result.outputs);
console.log(result.saved);

Python

import json
import subprocess
 
result = subprocess.run(
    [
        "wavespeed",
        "run",
        "wavespeed-ai/z-image/turbo",
        "-p",
        "a clean product photo",
        "--json",
        "--download",
    ],
    capture_output=True,
    text=True,
    check=True,
)
 
data = json.loads(result.stdout)
print(data["outputs"])
print(data["saved"])
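The Python example above can be folded into a small reusable wrapper. This sketch only assembles the argv and delegates to subprocess; the flag names mirror the ones documented earlier, and the JSON parsing follows the same field assumptions as the examples above.

```python
import json
import subprocess

def build_run_argv(model_id, prompt=None, inputs=None, download=False):
    """Assemble the `wavespeed run` command from Python values."""
    argv = ["wavespeed", "run", model_id, "--json"]
    if prompt is not None:
        argv += ["-p", prompt]
    for key, value in (inputs or {}).items():
        argv += ["-i", f"{key}={value}"]
    if download:
        argv.append("--download")
    return argv

def run_model(model_id, **kwargs):
    """Run the model and return the parsed --json result."""
    proc = subprocess.run(build_run_argv(model_id, **kwargs),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

print(build_run_argv("wavespeed-ai/z-image/turbo",
                     prompt="a clean product photo",
                     inputs={"size": "1024*1024"},
                     download=True))
```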

© 2025 WaveSpeedAI. All rights reserved.