Vidu Image To Video 2.0
Playground
Try it on WavespeedAI! Vidu Image to Video 2.0 converts images into smooth-transition videos with exceptional visual quality and diverse, natural motion. The ready-to-use REST inference API offers strong performance, no cold starts, and affordable pricing.
Features
Vidu Image-to-Video 2.0
Vidu Image-to-Video 2.0 is a powerful image-to-video generation model that transforms static images into dynamic, cinematic videos. Upload an image, describe the motion you want, and control the movement intensity — from subtle animations to dramatic action sequences.
Why It Stands Out
- Image-driven generation: Animate any image while preserving its original style and composition.
- Movement amplitude control: Choose from auto, small, medium, or large motion intensity.
- Prompt-guided motion: Describe camera movements, actions, and expressions in detail.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
- Cinematic quality: Produces smooth, professional-looking video output.
- Reproducibility: Use the seed parameter to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of desired motion and action. |
| image | Yes | Source image to animate (upload or public URL). |
| movement_amplitude | No | Motion intensity: auto, small, medium, large (default: auto). |
| seed | No | Set for reproducibility; leave empty for random. |
Movement Amplitude Options
| Setting | Best For |
|---|---|
| auto | Let the model decide based on prompt and image content |
| small | Subtle animations, breathing, blinking, gentle movements |
| medium | Moderate motion, walking, talking, natural gestures |
| large | Dynamic action, running, dramatic movements, action scenes |
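As a purely illustrative sketch, the amplitude table above could be wrapped in a small client-side helper that suggests a setting from a scene description. The keyword lists below are assumptions for demonstration, not part of the API:

```python
# Hypothetical helper: map a rough scene description to a movement_amplitude
# value from the table above. The keyword lists are illustrative assumptions.
AMPLITUDE_KEYWORDS = {
    "small": ["breathing", "blinking", "portrait", "subtle"],
    "medium": ["walking", "talking", "gesture"],
    "large": ["running", "action", "dramatic"],
}

def suggest_amplitude(prompt: str) -> str:
    """Return a movement_amplitude suggestion; fall back to 'auto'."""
    lowered = prompt.lower()
    for amplitude, keywords in AMPLITUDE_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return amplitude
    return "auto"
```

In practice, `auto` is a safe default; an explicit setting is mainly useful when the model's automatic choice under- or over-animates the scene.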
How to Use
- Upload your source image — drag and drop a file or paste a public URL.
- Write a prompt describing the motion, camera movement, and expressions you want. Use the Prompt Enhancer for AI-assisted optimization.
- Select movement amplitude — choose the intensity of motion that fits your scene.
- Set a seed (optional) for reproducible results.
- Click Run and wait for your video to generate.
- Preview and download the result.
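The steps above map onto a single request body. A minimal sketch of assembling it, using the field names from the Parameters table (the image URL and prompt here are placeholders):

```python
import json

def build_submit_payload(image_url, prompt,
                         movement_amplitude="auto", seed=None):
    """Assemble the JSON body for a task submission.

    A sketch, not an official SDK: field names follow the Parameters table.
    """
    body = {
        "image": image_url,
        "prompt": prompt,
        "movement_amplitude": movement_amplitude,
    }
    if seed is not None:
        body["seed"] = seed  # omit for a random seed
    return json.dumps(body)

payload = build_submit_payload(
    "https://example.com/cat.jpg",            # placeholder URL
    "The cat's ears twitch and it slowly blinks",
    movement_amplitude="small",
    seed=42,
)
```

The resulting string is what you would pass as the request body of the POST call shown under API Endpoints.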
Best Use Cases
- Character Animation — Bring characters to life with expressions and movements.
- Social Media Content — Create engaging video posts from static images.
- Marketing & Advertising — Animate product images and promotional content.
- Storytelling — Generate cinematic scenes from artwork and stills.
- Wildlife & Nature — Add realistic motion to animal and nature photos.
Pricing
| Output | Price |
|---|---|
| Per video | $0.30 |
Pro Tips for Best Quality
- Use high-resolution, well-lit source images for optimal results.
- Be detailed in your prompt — describe camera movement, subject actions, and expressions.
- Match movement_amplitude to your scene: “small” for portraits, “large” for action.
- Include cinematic keywords like “slowly zooms in,” “medium shot,” or “tracking shot.”
- Describe subtle details like breathing, ear twitching, or expression changes.
- Fix the seed when iterating to compare different amplitude settings.
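For example, the last tip (fixing the seed while sweeping amplitude settings) could be scripted like this; the request bodies mirror the submission parameters, and the image URL and prompt are placeholders:

```python
# Build one request body per amplitude, holding the seed constant so that
# differences in the outputs come only from movement_amplitude.
FIXED_SEED = 12345
AMPLITUDES = ["small", "medium", "large"]

request_bodies = [
    {
        "image": "https://example.com/portrait.jpg",  # placeholder URL
        "prompt": "The subject breathes gently and blinks",
        "movement_amplitude": amplitude,
        "seed": FIXED_SEED,
    }
    for amplitude in AMPLITUDES
]
```

Submitting each body as a separate task lets you compare the three results side by side before committing to a setting.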
Notes
- Ensure uploaded image URLs are publicly accessible.
- Processing time varies based on current queue load.
- Please ensure your prompts comply with content guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
```shell
# Submit the task (prompt and image are required; see Parameters below)
curl --location --request POST "https://api.wavespeed.ai/api/v3/vidu/image-to-video-2.0" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/source-image.jpg",
    "prompt": "The subject slowly turns toward the camera and smiles",
    "movement_amplitude": "auto"
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
```
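In Python, the same submit-then-poll flow might look like the sketch below. The fetch function is injected so the polling logic can be shown without a live API key; the endpoint path is taken from the curl example above:

```python
import time

def poll_result(request_id, fetch, interval=2.0, max_attempts=30):
    """Poll the /predictions/{id}/result endpoint until a terminal status.

    `fetch` is any callable that takes a URL and returns the parsed JSON
    response; in real use it would wrap an authenticated HTTP GET.
    """
    url = f"https://api.wavespeed.ai/api/v3/predictions/{request_id}/result"
    for _ in range(max_attempts):
        response = fetch(url)
        status = response["data"]["status"]
        if status == "completed":
            return response["data"]["outputs"]
        if status == "failed":
            raise RuntimeError(response["data"]["error"] or "task failed")
        time.sleep(interval)  # still created/processing; wait and retry
    raise TimeoutError("prediction did not finish in time")
```

Because processing time varies with queue load, a bounded retry loop like this is preferable to a single blocking call.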
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | An image to use as the start frame of the generated video. Only one image is accepted, provided as a URL or Base64-encoded data. Supported formats: PNG, JPEG, JPG, WebP. The aspect ratio must be between 1:4 and 4:1. Images are limited to 50MB; a Base64 payload must decode to under 50MB and include an appropriate content-type string. |
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| movement_amplitude | string | No | auto | auto, small, medium, large | The movement amplitude of objects in the frame. |
| seed | integer | No | - | -1 ~ 2147483647 | The random seed to use for the generation; leave empty for a random seed. |
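A client-side validation sketch based on the ranges in this table; the field names and limits come straight from the table, while the helper itself is hypothetical:

```python
# Constraints taken from the request-parameter table above.
VALID_AMPLITUDES = {"auto", "small", "medium", "large"}
SEED_MIN, SEED_MAX = -1, 2147483647

def validate_request(body):
    """Return a list of problems; an empty list means the body looks acceptable."""
    errors = []
    if not body.get("image"):
        errors.append("image is required")
    if not body.get("prompt"):
        errors.append("prompt is required")
    if body.get("movement_amplitude", "auto") not in VALID_AMPLITUDES:
        errors.append("movement_amplitude must be one of: auto, small, medium, large")
    seed = body.get("seed")
    if seed is not None and not (SEED_MIN <= seed <= SEED_MAX):
        errors.append("seed must be between -1 and 2147483647")
    return errors
```

Validating locally avoids a round trip for requests the API would reject anyway.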
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
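Reading the submission response according to the table above; the sample JSON below is illustrative, with field names and the example timestamp taken from the table:

```python
import json

# Illustrative submission response, shaped per the response-parameter table.
sample = json.loads("""
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "task-123",
    "model": "vidu/image-to-video-2.0",
    "outputs": [],
    "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/task-123/result"},
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {}
  }
}
""")

# The two fields a client typically needs right away:
task_id = sample["data"]["id"]               # pass to the result endpoint
result_url = sample["data"]["urls"]["get"]   # or call this URL directly
```

Note that `outputs` is empty here because the task status is still `created`; URLs appear only once the status reaches `completed`.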
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
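Per the table above, `data.outputs` only holds video URLs once `data.status` is `completed`. A small guard like this sketch reflects that contract (the sample payloads are illustrative):

```python
def extract_outputs(result):
    """Return output URLs for a completed task; raise on failure.

    Returns an empty list while the task is still created/processing,
    signalling the caller to poll again.
    """
    data = result["data"]
    if data["status"] == "failed":
        raise RuntimeError(data["error"] or "task failed")
    if data["status"] != "completed":
        return []
    return data["outputs"]

completed = {"data": {"status": "completed",
                      "outputs": ["https://cdn.example/video.mp4"],
                      "error": ""}}
urls = extract_outputs(completed)
```

Checking `status` before touching `outputs` avoids treating an in-progress task's empty array as a finished-but-empty result.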