minimax/video-01

Generate 6-second videos from text prompts or images (also known as Hailuo). Use a subject reference image with the S2V-01 model to create videos featuring a consistent character.

Your request will cost $0.50 per video; for $1 you can run this model approximately 2 times.

README

minimax/video-01 is an advanced AI-native video generation model developed by MiniMax and hosted on WaveSpeedAI. This model enables the creation of high-definition videos at 720p resolution and 25fps, featuring cinematic camera movements such as panning, tilting, and tracking. It supports text-to-video, image-to-video, and subject-to-video modes, allowing users to generate dynamic content based on text descriptions or reference images.
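
A minimal text-to-video request could look like the sketch below. The endpoint path, payload fields, and response shape are assumptions modeled on typical hosted-inference APIs, not the documented WaveSpeedAI contract; consult the official API reference for the exact parameters.

```python
# Sketch of a text-to-video request; endpoint paths and field names are assumptions.
import os
import time
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]        # assumed environment variable name
BASE_URL = "https://api.wavespeed.ai/api/v3"     # assumed base URL

def submit_text_to_video(prompt: str) -> str:
    """Submit a generation job and return its task id (assumed response field)."""
    resp = requests.post(
        f"{BASE_URL}/minimax/video-01",           # assumed endpoint path
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "enable_prompt_optimizer": True},  # assumed fields
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["id"]

def wait_for_video(task_id: str, poll_seconds: int = 5) -> str:
    """Poll until the job finishes and return the output video URL (assumed fields)."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/predictions/{task_id}/result",  # assumed endpoint path
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        if data["status"] == "completed":
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error", "generation failed"))
        time.sleep(poll_seconds)

if __name__ == "__main__":
    task_id = submit_text_to_video(
        "A slow cinematic pan across a rain-soaked neon street at night"
    )
    print("Video URL:", wait_for_video(task_id))
```

The same submit-then-poll pattern applies to image-to-video: the request would additionally carry a reference image, with the remaining flow unchanged.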

Key Features

  • High-Resolution Output: Produces 720p videos with cinematic quality, featuring smooth camera movements such as panning, tilting, and tracking.
  • Text Responsiveness: Delivers precise alignment with complex prompts, ensuring outputs match user expectations.
  • Style Versatility: Supports a wide range of artistic and realistic styles, enabling diverse creative expressions.
  • Efficiency: Rapid generation of visually striking content, with current support for videos up to 6 seconds long (with plans to extend duration in future updates).
  • Subject Reference: Users can upload a reference image to generate videos with consistent character appearances, making it easy to create personalized content.
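
For subject-to-video generation with S2V-01, the request carries a reference image of the character alongside the prompt. The sketch below illustrates one plausible shape for that call; the model parameter, the field name for the reference image, and the accepted image formats are all assumptions rather than documented behavior.

```python
# Sketch of a subject-reference (S2V-01) request; field names are assumptions.
import base64
import os
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]        # assumed environment variable name
BASE_URL = "https://api.wavespeed.ai/api/v3"     # assumed base URL

with open("character.jpg", "rb") as f:
    # Many hosted APIs accept either a public URL or a base64 data URI;
    # which forms are supported here is an assumption.
    subject_image = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    f"{BASE_URL}/minimax/video-01",               # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "S2V-01",                         # assumed parameter selecting subject-to-video
        "prompt": "The character walks through a sunlit park, tracking shot",
        "subject_reference": subject_image,        # assumed field name
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```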

ComfyUI

minimax/video-01 is also available on ComfyUI, providing local inference capabilities through a node-based workflow, ensuring flexible and efficient video generation on your system.
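
When driving ComfyUI programmatically, jobs are queued over its local HTTP API. The sketch below posts a workflow to ComfyUI's /prompt endpoint; the workflow JSON itself depends on which MiniMax/WaveSpeedAI nodes you have installed, so it is loaded from an exported file (a hypothetical filename) rather than guessed here.

```python
# Sketch: queue a ComfyUI workflow via its local HTTP API.
# Export the workflow with "Save (API Format)" in the ComfyUI interface first;
# the filename below is a hypothetical placeholder.
import json
import requests

COMFYUI_URL = "http://127.0.0.1:8188"   # default local ComfyUI address

with open("video-01_workflow_api.json") as f:
    workflow = json.load(f)

resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow}, timeout=30)
resp.raise_for_status()
print("Queued prompt:", resp.json()["prompt_id"])
```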

Limitations

  • Video Duration: Currently supports generating videos up to 6 seconds long; future updates aim to extend this duration.
  • Input Sensitivity: The quality and consistency of generated videos depend significantly on the quality of the input text or image; subtle variations may lead to output variability.
  • Creative Focus: Designed for creative video synthesis; not intended for generating factually accurate or reliable content.

Out-of-Scope Use

The model and its derivatives may not be used in any way that violates applicable national, federal, state, local, or international law or regulation, including but not limited to:

  • Exploiting, harming, or attempting to exploit or harm minors, including solicitation, creation, acquisition, or dissemination of child exploitative content.
  • Generating or disseminating verifiably false information with the intent to harm others.
  • Creating or distributing personally identifiable information that could be used to harm an individual.
  • Harassing, abusing, threatening, stalking, or bullying individuals or groups.
  • Producing non-consensual nudity or illegal pornographic content.
  • Making fully automated decisions that adversely affect an individual’s legal rights or create binding obligations.
  • Facilitating large-scale disinformation campaigns.

Accelerated Inference

Our accelerated inference approach leverages advanced optimization technology from WaveSpeedAI. This fusion technique significantly reduces computational overhead and latency, enabling rapid video generation without compromising quality. The system is designed to handle large-scale inference tasks efficiently while ensuring that real-time applications achieve an optimal balance between speed and accuracy. For further details, please refer to the blog post.