How to Use Seedance 2.0: The Complete Guide to Creating Cinematic AI Videos (2026)
Learn how to use Seedance 2.0 step by step: prompt formulas, @ references, camera control, and character consistency. Try it free on Pixo.

Most Seedance 2.0 tutorials start with "go to Dreamina and sign up." That's fine, provided you have a Chinese phone number, don't mind a restricted free tier, and are okay with burning credits on trial and error while learning the model's quirks.
I've spent the past several weeks testing Seedance 2.0 extensively, generating over 200 clips across different prompt styles, reference configurations, and use cases. What I've found is that the difference between a mediocre Seedance 2.0 output and a jaw-dropping cinematic clip usually isn't the model itself. It's whether you know how to talk to it.
This guide covers everything from your first generation to advanced multi-reference workflows. And if you want to skip the access headaches entirely, Pixo lets you use Seedance 2.0 with a simple signup: no regional restrictions, no enterprise email required, and new users get free generations to start.
What Makes Seedance 2.0 Different
Before diving into how-to, it helps to understand why Seedance 2.0 matters. Here's how it stacks up:
| Feature | Seedance 2.0 | Sora 2 | Kling 3.0 | Veo 3.1 |
|---|---|---|---|---|
| Native Audio | Yes (music, dialogue, SFX) | No | No | Yes |
| Multi-Shot Generation | Yes | No | No | No |
| Multimodal Input | Text + Image + Video + Audio | Text + Image | Text + Image | Text + Image |
| Max Reference Files | 12 (9 images + 3 video + 3 audio) | Limited | 1 image | 1 image |
| Output Resolution | Up to 2K | 1080p | 1080p | 4K |
| Camera Control | Director-level | Basic | Advanced | Moderate |
The standout feature is multimodal input: you can feed Seedance 2.0 images, video clips, and audio files simultaneously and tell it exactly how to use each one. No other consumer model does this.
Step 1: Choose Your Access Platform
Seedance 2.0 isn't available everywhere. Here's the current landscape:
Dreamina (Official): ByteDance's own platform. Full feature access, but the international version restricts Seedance 2.0 to invited creators. The free tier gives daily credits but is limited.
CapCut: Integrated into the editing workflow, but only available in select countries (Brazil, Indonesia, Thailand, Vietnam, and a few others). Not yet in the US or Europe.
Pixo: Multi-model platform with global access. No regional restrictions, no enterprise email needed. New users get free Seedance 2.0 generations. You can also access Kling, Veo, Hailuo, and other models in the same workspace, which is useful for comparing outputs or mixing models in a single project.
For this tutorial, I'll use Pixo as the example platform since it's the most accessible option for international users.
Step 2: Understand the Prompt Formula
Seedance 2.0 responds best to structured prompts. The proven formula is:
Subject + Action + Scene/Atmosphere + Camera Movement + Style/Lighting
Length matters: keep it between 30 and 100 words. I've tested this extensively. Prompts under 30 words produce generic, unpredictable results. Prompts over 100 words cause the model to cherry-pick random details while ignoring the ones you actually care about. The sweet spot is around 50–80 words.
Example: A Good Prompt
A young woman in a vintage red dress walks slowly through a sunlit European alley. Cobblestone street, hanging flower baskets, warm afternoon light. Slow tracking shot from behind, gradually revealing the street ahead. Cinematic film grain, soft golden hour lighting, shallow depth of field.
Example: A Bad Prompt
A woman walks down a street. Make it look cinematic.
The first prompt gives Seedance 2.0 clear instructions about subject, action, environment, camera, and style. The second leaves everything ambiguous, and the model will fill in the gaps randomly.
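If you generate prompts programmatically (for batch jobs or A/B tests), the five-part formula and the length guidance above are easy to encode. A minimal Python sketch; the function name and word limits simply restate the guidance in this guide, not any official Seedance API:

```python
def build_prompt(subject, action, scene, camera, style):
    """Assemble a prompt from the five formula parts:
    Subject + Action + Scene/Atmosphere + Camera Movement + Style/Lighting."""
    prompt = " ".join([subject, action, scene, camera, style])
    words = len(prompt.split())
    # The guide's sweet spot: 30-100 words total, ideally 50-80.
    if words < 30:
        print(f"Warning: only {words} words; results may be generic.")
    elif words > 100:
        print(f"Warning: {words} words; the model may start ignoring details.")
    return prompt

prompt = build_prompt(
    "A young woman in a vintage red dress",
    "walks slowly through a sunlit European alley.",
    "Cobblestone street, hanging flower baskets, warm afternoon light.",
    "Slow tracking shot from behind, gradually revealing the street ahead.",
    "Cinematic film grain, soft golden hour lighting, shallow depth of field.",
)
print(prompt)
```

Forcing yourself to fill all five slots is the real benefit: it makes the "bad prompt" failure mode above structurally impossible.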
The One-Action Rule
This is the single most important tip for beginners: one clear action per shot. If your prompt says "she walks to the table, picks up a glass, turns around and waves," Seedance 2.0 will likely botch at least one of those actions. Break it into separate generations instead.
Step 3: Master the @ Reference System
The @ reference system is Seedance 2.0's killer feature, and its most misunderstood one. Here's how it works:
When you upload files, the model assigns labels automatically:
- Images → @Image1, @Image2, etc.
- Video clips → @Video1, @Video2
- Audio files → @Audio1, @Audio2
You then reference these directly in your prompt:
@Image1 is the main character. She walks through a rainy Tokyo street at night. @Audio1 plays as the background music. Slow dolly forward shot, neon reflections on wet pavement, cinematic color grading.
Reference Hierarchy
The model prioritizes references in this order:
- @Audio: used for lip-sync and beat matching. If you upload a voiceover, the generated character's mouth will sync to it.
- @Video: transfers motion trajectories and camera language. Upload a clip with specific camera movement, and Seedance 2.0 will replicate it.
- @Image: locks character appearance (face, clothing, style). Best results come from mid-body portraits with clean backgrounds.
Pro Tips for References
For character consistency: Use a waist-up (mid-shot) portrait with a simple background. Transparent PNGs work best: remove the background so Seedance 2.0 can focus purely on the subject.
For multi-angle consistency: Prepare 2–4 images of the same character from different angles. This gives the model a comprehensive understanding of the character's geometry and dramatically reduces face drift during head rotations.
For motion transfer: Trim reference videos to under 15 seconds. Longer clips confuse the model. If you have a 30-second clip with the perfect camera move, cut out just the 5–8 seconds you need.
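Trimming that 5–8 second segment is a one-liner with ffmpeg (`-ss` seeks to a start time, `-t` caps the duration, `-c copy` avoids re-encoding). A small helper that builds the command and enforces the 15-second limit from the tip above; the filenames are placeholders:

```python
def ffmpeg_trim_cmd(src, dst, start_s, duration_s, max_len=15):
    """Build an ffmpeg command that cuts out just the segment you need.
    -ss seeks to the start, -t limits duration, -c copy skips re-encoding."""
    if duration_s > max_len:
        raise ValueError(f"Reference clips should stay under {max_len}s")
    return ["ffmpeg", "-ss", str(start_s), "-t", str(duration_s),
            "-i", src, "-c", "copy", dst]

print(" ".join(ffmpeg_trim_cmd("full_take.mp4", "camera_ref.mp4", 10, 8)))
# ffmpeg -ss 10 -t 8 -i full_take.mp4 -c copy camera_ref.mp4
```

Run the printed command (or pass the list to `subprocess.run`) to extract an 8-second reference starting at the 10-second mark.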
Step 4: Camera Control That Actually Works
Seedance 2.0 handles camera work that other models struggle with. Here's what works reliably:
| Camera Move | Prompt Phrasing | Quality |
|---|---|---|
| Dolly zoom | "slow dolly zoom into the subject's face" | Excellent |
| Tracking shot | "tracking shot from left to right following the subject" | Excellent |
| POV | "first-person POV walking through the hallway" | Good |
| Rack focus | "rack focus from foreground object to subject in background" | Good |
| Crane shot | "crane shot rising from ground level to aerial view" | Moderate |
| Handheld | "subtle handheld movement, documentary style" | Excellent |
Critical rule: one camera movement per shot. If you stack "pan + zoom + tracking" in one prompt, you'll get jittery output that looks like a broken gimbal. Pick one and commit.
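The one-move rule is easy to lint for before you spend credits. A rough heuristic check, entirely my own; the keyword list is illustrative and not part of any Seedance tooling:

```python
import re

# Longest phrases first so "dolly zoom" counts as one move, not two.
CAMERA_MOVES = ["dolly zoom", "rack focus", "tracking", "dolly",
                "crane", "handheld", "pov", "pan", "zoom"]

def camera_moves_in(prompt):
    """Return the distinct camera-move keywords found in a prompt."""
    p = prompt.lower()
    found = []
    for move in CAMERA_MOVES:
        if re.search(rf"\b{move}\b", p):
            found.append(move)
            p = re.sub(rf"\b{move}\b", "", p)  # consume so sub-words don't double-count
    return found

print(camera_moves_in("slow dolly zoom into the subject's face"))  # ['dolly zoom']
print(camera_moves_in("pan across the room then zoom in"))         # ['pan', 'zoom']
```

If the list comes back with more than one entry, split the shot before generating.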
Step 5: Using Seedance 2.0 on Pixo
On Pixo, you have two ways to use Seedance 2.0:
Option A: Talk to the AI Agent
The simplest approach. Just tell Pixo's AI Agent what you want:
"I want to create a 15-second video using Seedance 2.0. A cat sitting on a windowsill watching rain, cozy indoor lighting, soft lo-fi music in the background, slow push-in camera movement."
The Agent will handle model selection, prompt optimization, and parameter settings for you. It can even suggest improvements to your description that will produce better results from Seedance 2.0 specifically.
Option B: Use Canvas for Manual Control
If you prefer hands-on control, open Pixo's Canvas and:
1. Select Seedance 2.0 as your model
2. Upload your reference files (images, video, audio)
3. Write your prompt using the @ reference tags
4. Set duration (up to 15 seconds), resolution, and aspect ratio
5. Generate
Canvas is also where you can build multi-scene projects: using Seedance 2.0 for hero shots and switching to Kling or Hailuo for other scenes, all within the same storyboard.
Common Mistakes to Avoid
After 200+ generations, here are the patterns I see beginners repeat:
Mistake 1: Overloading the prompt. If your prompt doesn't fit in a tweet, it's probably too long. Seedance 2.0 starts ignoring details past ~100 words.
Mistake 2: Too many characters. Three or more characters in one scene causes faces to drift, bodies to warp, and sometimes extra limbs to appear. Keep it to two characters maximum.
Mistake 3: Not assigning reference roles. Uploading files without telling the model what each one is for produces chaotic results. Always use @ tags and explicitly state each file's role: "@Image1 is the main character. @Audio1 is the background music."
Mistake 4: Generating long clips first. Start with 3–5 second test clips to dial in your prompt and references. Once you're happy with the look, generate the full 15-second version. This saves a massive amount of credits.
Mistake 5: Ignoring lighting in prompts. Lighting is the single highest-leverage element in any Seedance 2.0 prompt. "Soft golden hour lighting" or "dramatic rim light against a dark background" will dramatically improve your output quality. Leaving it out forces the model to guess.
What If Seedance 2.0 Isn't the Right Model for Your Shot?
Here's something most guides won't tell you: Seedance 2.0 isn't always the best choice.
For fast B-roll and social cuts, Hailuo is faster and cheaper. For photorealistic product shots, Veo delivers better results. For motion graphics and stylized content, Kling 3.0 offers more control.
On Pixo, you don't have to choose just one. The AI Agent can recommend the best model for each shot in your project, and you can mix models within a single storyboard. Use Seedance 2.0 where its strengths matter most (native audio, multi-shot storytelling, cinematic camera work) and let other models handle the rest.
This multi-model approach means your Seedance 2.0 generations are always intentional, not wasted on shots where a simpler model would suffice.
Quick-Start Prompt Templates
Copy these and modify for your use case:
Cinematic Narrative
A lone figure stands at the edge of a cliff overlooking a misty valley at dawn. Wind moves through their hair and coat. Slow dolly forward shot, epic landscape composition, golden hour lighting with volumetric fog, cinematic color grading.
Product Showcase
@Image1 is the product. The product rotates slowly on a reflective dark surface. Soft studio lighting from above, clean white highlights, premium feel. Slow 360-degree orbit shot, shallow depth of field, commercial photography style.
Social Media / UGC Style
@Image1 is the character. She records herself unboxing a package with excited reaction. Handheld phone camera angle, warm indoor lighting, casual authentic feel. @Audio1 plays as trending background music.
Anime / Stylized
An anime-style warrior draws a glowing sword in a moonlit bamboo forest. Cherry blossom petals drift through the air. Dynamic camera tracking from low angle, vibrant colors, Studio Ghibli-inspired atmospheric lighting.
Start Creating
The best way to learn Seedance 2.0 is to generate. Grab one of the templates above, tweak it for your idea, and see what comes out. Then iterate β adjust the camera, change the lighting, swap a reference image.
Sign up on Pixo to get free Seedance 2.0 generations with no regional restrictions. You'll also have access to Kling, Veo, Hailuo, and other models in the same workspace, so you can compare outputs and find the right model for every shot.
The gap between "I typed a prompt and got something okay" and "I directed a cinematic clip that looks professional" is smaller than you think. It's mostly about structured prompts, intentional references, and knowing when to use which model.
Try Seedance 2.0 free on Pixo: no VPN, no enterprise email, no regional restrictions.


