Vidu Image To Video 2.0

Vidu Image to Video 2.0 converts images into videos with smooth transitions, exceptional visual quality, and diverse, natural motion. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

Vidu Image-to-Video 2.0

Vidu Image-to-Video 2.0 is a powerful image-to-video generation model that transforms static images into dynamic, cinematic videos. Upload an image, describe the motion you want, and control the movement intensity — from subtle animations to dramatic action sequences.


Why It Stands Out

  • Image-driven generation: Animate any image while preserving its original style and composition.
  • Movement amplitude control: Choose from auto, small, medium, or large motion intensity.
  • Prompt-guided motion: Describe camera movements, actions, and expressions in detail.
  • Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
  • Cinematic quality: Produces smooth, professional-looking video output.
  • Reproducibility: Use the seed parameter to recreate exact results.

Parameters

| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of desired motion and action. |
| image | Yes | Source image to animate (upload or public URL). |
| movement_amplitude | No | Motion intensity: auto, small, medium, or large (default: auto). |
| seed | No | Set for reproducibility; leave empty for random. |
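As a sketch of how these parameters fit together, the hypothetical helper below assembles a request payload and validates the optional fields against the documented ranges. It is not an official client; the function and variable names are illustrative only.

```python
# Hypothetical helper: build a JSON-serializable payload for the
# image-to-video endpoint and validate optional parameters against
# the documented ranges. Sketch only, not an official client.

VALID_AMPLITUDES = {"auto", "small", "medium", "large"}

def build_payload(image, prompt, movement_amplitude="auto", seed=None):
    """Return a payload dict for task submission."""
    if movement_amplitude not in VALID_AMPLITUDES:
        raise ValueError(
            f"movement_amplitude must be one of {sorted(VALID_AMPLITUDES)}"
        )
    payload = {
        "image": image,
        "prompt": prompt,
        "movement_amplitude": movement_amplitude,
    }
    if seed is not None:
        # Documented range: -1 ~ 2147483647
        if not (-1 <= seed <= 2147483647):
            raise ValueError("seed must be in -1 ~ 2147483647")
        payload["seed"] = seed
    return payload
```

Leaving `seed` as `None` omits it from the payload, which the API treats as a request for a random seed.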

Movement Amplitude Options

| Setting | Best For |
|---|---|
| auto | Let the model decide based on prompt and image content |
| small | Subtle animations, breathing, blinking, gentle movements |
| medium | Moderate motion, walking, talking, natural gestures |
| large | Dynamic action, running, dramatic movements, action scenes |

How to Use

  1. Upload your source image — drag and drop a file or paste a public URL.
  2. Write a prompt describing the motion, camera movement, and expressions you want. Use the Prompt Enhancer for AI-assisted optimization.
  3. Select movement amplitude — choose the intensity of motion that fits your scene.
  4. Set a seed (optional) for reproducible results.
  5. Click Run and wait for your video to generate.
  6. Preview and download the result.

Best Use Cases

  • Character Animation — Bring characters to life with expressions and movements.
  • Social Media Content — Create engaging video posts from static images.
  • Marketing & Advertising — Animate product images and promotional content.
  • Storytelling — Generate cinematic scenes from artwork and stills.
  • Wildlife & Nature — Add realistic motion to animal and nature photos.

Pricing

| Output | Price |
|---|---|
| Per video | $0.30 |

Pro Tips for Best Quality

  • Use high-resolution, well-lit source images for optimal results.
  • Be detailed in your prompt — describe camera movement, subject actions, and expressions.
  • Match movement_amplitude to your scene: “small” for portraits, “large” for action.
  • Include cinematic keywords like “slowly zooms in,” “medium shot,” or “tracking shot.”
  • Describe subtle details like breathing, ear twitching, or expression changes.
  • Fix the seed when iterating to compare different amplitude settings.
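The last tip can be sketched in code: fix the seed and sweep `movement_amplitude`, so motion intensity is the only variable between runs. The image URL and prompt below are placeholders.

```python
# Sketch: one payload per amplitude setting with a fixed seed, for
# side-by-side comparison of motion intensity. No network calls here;
# the URL and prompt are placeholders.

FIXED_SEED = 12345

payloads = [
    {
        "image": "https://example.com/portrait.jpg",  # placeholder
        "prompt": "the subject breathes softly, slow zoom in",
        "movement_amplitude": amp,
        "seed": FIXED_SEED,
    }
    for amp in ("small", "medium", "large")
]
```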

Notes

  • Ensure uploaded image URLs are publicly accessible.
  • Processing time varies based on current queue load.
  • Please ensure your prompts comply with content guidelines.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/vidu/image-to-video-2.0" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/source-image.jpg",
    "prompt": "The subject slowly turns toward the camera, gentle breeze, cinematic lighting",
    "movement_amplitude": "auto"
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
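The same submit-then-poll flow can be sketched in Python. This is not an official SDK: `submit_and_poll` and the injected `http_post`/`http_get` callables (thin wrappers you would write around your HTTP library of choice) are assumptions; the endpoint paths and response field names follow the tables below.

```python
# Sketch of the submit-and-poll flow. http_post/http_get are injected
# callables returning parsed JSON dicts, so the control flow can be
# exercised without touching the network. Not an official client.
import json
import time

API_BASE = "https://api.wavespeed.ai/api/v3"

def submit_and_poll(payload, api_key, http_post, http_get,
                    interval=2.0, timeout=120.0):
    """Submit a generation task, then poll its result endpoint until
    the task leaves the created/processing states."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    task = http_post(f"{API_BASE}/vidu/image-to-video-2.0",
                     headers=headers, body=json.dumps(payload))
    task_id = task["data"]["id"]

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = http_get(f"{API_BASE}/predictions/{task_id}/result",
                          headers={"Authorization": f"Bearer {api_key}"})
        status = result["data"]["status"]
        if status == "completed":
            return result["data"]["outputs"]
        if status == "failed":
            raise RuntimeError(result["data"]["error"])
        time.sleep(interval)
    raise TimeoutError("generation did not finish in time")
```

Processing time varies with queue load, so pick `interval` and `timeout` to suit your workload rather than hammering the result endpoint.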

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | Image used as the start frame of the generated video. Accepts exactly 1 image, provided as a URL or Base64-encoded data. Supported formats: PNG, JPEG, JPG, WebP. The aspect ratio must be between 1:4 and 4:1. Images are limited to 50MB; Base64 input must decode to under 50MB and include an appropriate content type string. |
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| movement_amplitude | string | No | auto | auto, small, medium, large | The movement amplitude of objects in the frame. |
| seed | integer | No | - | -1 ~ 2147483647 | The random seed to use for the generation. |
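When passing the image as Base64 rather than a URL, a data URL with an explicit content type satisfies the "appropriate content type string" requirement. A minimal sketch, assuming data-URL input is accepted in that form (the helper name is illustrative):

```python
# Sketch: encode raw image bytes as a data URL for the `image` field.
# Enforces the documented 50MB decoded-size limit and attaches an
# explicit content type. Helper name is illustrative.
import base64

def image_to_data_url(image_bytes, content_type="image/png"):
    if len(image_bytes) > 50 * 1024 * 1024:
        raise ValueError("decoded image must be under 50MB")
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{content_type};base64,{b64}"
```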

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content (empty until status is completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Boolean values indicating NSFW detection for each output |
| data.status | string | Task status: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
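Since `data.outputs` and `data.has_nsfw_contents` are parallel arrays, a consumer can pair them to filter flagged results. A minimal sketch (the function name is illustrative; field names follow the table above):

```python
# Sketch: keep only output URLs whose parallel NSFW flag is False.
# Field names follow the documented response shape.

def safe_outputs(response):
    data = response["data"]
    outputs = data["outputs"]
    flags = data.get("has_nsfw_contents") or [False] * len(outputs)
    return [url for url, nsfw in zip(outputs, flags) if not nsfw]
```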

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID that was queried) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content (empty until status is completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Task status: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.