Wan 2.7 by Alibaba
Wan 2.7 AI Video Generator
Wan 2.7 is Alibaba's newest Wan video system, now framed around text-to-video, image-to-video, reference-to-video, and prompt-based video editing. It pushes harder on references, tracked revisions, and more directed motion so a creator can generate, steer, and repair a shot inside one model family.
Wan 2.7 preview
Reference, edit, and push the shot further
What Wan 2.7 can do
Key features of Wan 2.7
The interesting part of Wan 2.7 is not one isolated spec. It is the way generation, reference control, and video revision start behaving like one connected creative workflow.
Four-mode system
Switch between text-to-video, image-to-video, reference-to-video, and prompt-based video editing without changing creative logic midway through production.
Reference stacking
Mix images and videos in one reference-to-video setup so character identity, motion cues, and scene intent survive the jump into a new generated shot.
Voice carryover
Pull appearance and voice from reference clips when applicable, giving performance-led scenes a firmer anchor than text prompting alone.
Shot endings
Guide motion from a first frame to a last frame when the scene has to land on one exact pose, pack shot, or final composition.
Tracked edits
Edit only the masked region while the model tracks the target through motion, making subject swaps, prop changes, and background fixes feel less destructive.
Action rhythm
Lean into emotional beats, action-heavy moments, and more cinematic cutting instead of treating Wan 2.7 as another generic one-shot generator.
Wan 2.7 video showcase
These sample clips show the lane Wan 2.7 fits best: reference-led motion, premium short scenes, and edits or endings that feel more directed than a one-pass generation.
Luxury pour shot
A glossy food scene that shows Wan 2.7 holding liquid motion, specular highlights, and shallow-depth styling without the shot falling apart.
Templates & prompts
Wan 2.7 Templates & Prompts
Explore 10 curated works from standout creators on Alici to see the scene directions that fit Wan 2.7 especially well.
Luxury Elevator Loop · Use This Template
Monster Energy Apocalypse Transformation · Use This Template
Miwa Angelic Transformation · Use This Template
Golden Hour School Music Room · Use This Template
Astronaut Girl Space Capsule · Use This Template
Cinematic Anime VFX · Use This Template
Spaceship Corridor Monster Fight · Use This Template
Kulrik And Boon Fog Forest Ambush · Use This Template
Shanghai Tsunami · Use This Template
Chinese Dragon Anime Fantasy Montage · Use This Template
How to use Wan 2.7 on Alici
How to create AI videos with Wan 2.7
The goal is not just to prompt one good clip. The goal is to choose the right creative lane, decide what must stay consistent, and only regenerate what actually needs to change.
Open Wan 2.7
Choose mode and references
Start in the lane that matches the job: text-to-video, image-to-video, reference-to-video, or edit. Reference-to-video can combine up to 5 files total, including both images and video clips.
Direct motion and changes
Describe the scene beat, camera behavior, and what must stay consistent. Use first and last frames for guided image-led motion, or masks when only one region of an existing video needs to change.
Render and compare passes
Generate short clips at up to 1080P, compare 2- to 15-second takes, and decide whether the shot needs a fresh generation, a reference-led remake, or one tracked local edit pass.
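For teams scripting these jobs instead of clicking through the UI, the lane, reference, and duration rules above can be checked before anything renders. The sketch below is illustrative only: Alici does not publish an API on this page, so `build_job` and its payload shape are assumptions; only the four modes, the 5-file reference cap, the 2- to 15-second takes, and the 720P/1080P tiers come from the text above.

```python
# Hypothetical helper for validating a Wan 2.7 job before submission.
# The payload shape is an assumption; the constraints it enforces
# (modes, <=5 reference files, 2-15 s takes, 720P/1080P) are from
# this page.

MODES = {"text-to-video", "image-to-video", "reference-to-video", "edit"}

def build_job(mode: str, prompt: str, references: list[str] | None = None,
              duration_s: int = 5, resolution: str = "1080P") -> dict:
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    references = references or []
    if mode == "reference-to-video" and len(references) > 5:
        raise ValueError("reference-to-video accepts at most 5 files total")
    if not 2 <= duration_s <= 15:
        raise ValueError("takes on this page run 2 to 15 seconds")
    if resolution not in {"720P", "1080P"}:
        raise ValueError("documented tiers are 720P and 1080P")
    return {"mode": mode, "prompt": prompt, "references": references,
            "duration_s": duration_s, "resolution": resolution}

job = build_job("reference-to-video", "luxury pour shot, shallow depth",
                references=["hero.png", "pour_motion.mp4"], duration_s=8)
```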
Featured creators
Top Wan 2.7 Creators on Instagram
These creators reflect the polished short-form directions Wan 2.7 fits well, from premium product scenes and creator edits to cinematic fantasy and fashion-led motion.
Aria Cruz | Influencer AI
@soy_aria_cruz · Workflow Wan 2.7 Creator
Our Insight: A strong match for fast-turn social scenes, fashion edits, and polished creator storytelling where the motion needs to feel cleaner than a template animation.
Raine
@raine_traveller · Workflow Wan 2.7 Creator
Our Insight: A good fit for narrative world-building, stylized creatures, and fantasy beats where motion smoothness matters as much as the final look.
Pablo Prompt
@pabloprompt · Workflow Wan 2.7 Creator
Our Insight: Useful for ad concepts and product comedy where a clean first impression and a sharper finish matter more than long-form editing depth.
Salma
@salmaaboukarr · Workflow Wan 2.7 Creator
Our Insight: A clear fit for beauty, luxury, and object transformation concepts that need a premium finish inside a short clip.
Night Wolf
@nightwolf_ai · Workflow Wan 2.7 Creator
Our Insight: Strong for automotive, trailer-like pacing, and polished brand-style motion studies where the ending frame needs to land cleanly.
Alex Glocknitzer
@ai_with_glock · Workflow Wan 2.7 Creator
Our Insight: Works well for cinematic social stories where a stronger endpoint and smoother motion make the result feel more intentional.
Karol Życzkowski
@dreamweaver_ai_pl · Workflow Wan 2.7 Creator
Our Insight: A good stylistic match for mythic, character-led, and high-fidelity fantasy scenes that need stronger coherence over the full shot.
aiproductionstudios
@aiproductionstudios · Workflow Wan 2.7 Creator
Our Insight: A useful benchmark for fashion, luxury, and campaign-style shots where the model has to keep the finish premium from start to end.
Technical specifications
Wan 2.7 technical specifications
The basic parameters still matter, but they are not the headline. What matters more is how Wan 2.7 combines output quality with reference-led generation and a real editing story.
Input specifications
- Generation modes: text-to-video, image-to-video, reference-to-video, and prompt-based video editing
- Reference stack: up to 5 files total in reference-to-video, with up to 3 videos and up to 5 images (see the sketch after this list)
- Reference extraction: appearance and voice can be pulled from reference clips when applicable
- Editing inputs: the unified editing workflow accepts text, image, and video
- Endpoint control: image-led starts can use first-frame or first-and-last-frame guidance
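As a concrete check of the reference-stack rule, here is a minimal sketch. Typing files by extension and passing plain paths are both simplifications for illustration; nothing here is an Alici API.

```python
# Minimal sketch of the reference-stack limits above: at most 5 files
# total, of which at most 3 may be videos. Extension-based file typing
# is an assumption made for illustration.

VIDEO_EXTS = (".mp4", ".mov", ".webm")

def check_reference_stack(files: list[str]) -> None:
    if len(files) > 5:
        raise ValueError("reference-to-video allows at most 5 files total")
    videos = [f for f in files if f.lower().endswith(VIDEO_EXTS)]
    if len(videos) > 3:
        raise ValueError("at most 3 of the reference files may be videos")

check_reference_stack(["face.png", "outfit.png", "walk_cycle.mp4"])  # passes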
Output specifications
- Resolution: 720P and 1080P across the current Wan video workflows
- Duration: image-to-video supports 2-15s, reference-to-video supports 2-10s, and first-and-last-frame mode is fixed at 5s (see the sketch after this list)
- Format: MP4 with H.264 encoding at 30 fps, per the public Wan docs
- Audio behavior: Wan family workflows support audio-on, silent output, auto dubbing, or custom audio sync depending on mode
- Aspect ratio tiers: 16:9, 9:16, 1:1, 4:3, and 3:4 remain the practical publishing set
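The per-mode duration windows above are easy to misremember, so here is the same information written as data. The mode keys are informal labels for this sketch, not official identifiers.

```python
# Per-mode duration windows from the spec list above (seconds).
DURATION_S = {
    "image-to-video": (2, 15),
    "reference-to-video": (2, 10),
    "first-and-last-frame": (5, 5),  # fixed length
}

def duration_ok(mode: str, seconds: float) -> bool:
    lo, hi = DURATION_S[mode]
    return lo <= seconds <= hi
```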
Workflow support
- Local editing: mask one area, track it through motion, and replace subjects, props, or backgrounds (see the sketch after this list)
- Global editing: prompt-led redraws and style shifts for whole-video changes
- Video extension: continue a clip forward from its end or backward from its start
- Prompt rewriting: built-in prompt extension can stabilize scene phrasing before render
- Narrative control: single-shot and multi-shot structures are both part of the Wan family workflow
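To make the local-editing flow concrete, the sketch below shows what a mask-tracked edit request might look like. Every field name here is invented for illustration; only the behavior (mask one region, track it through motion, replace the subject) comes from the list above.

```python
# Hypothetical mask-tracked local edit. Field names are assumptions;
# the described behavior is from the workflow list above.
edit_job = {
    "mode": "edit",
    "source_video": "take_03.mp4",
    "mask": "mask_frame0.png",   # region drawn on one frame
    "track_mask": True,          # model follows the region through motion
    "prompt": "replace the soda can with a glass bottle, keep lighting",
}
```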
Built for these workflows
Who uses Wan 2.7: use cases by industry
Wan 2.7 makes the most sense when a team needs more than a first render. It is strongest when the same concept has to survive remix, revision, and direction changes without collapsing.
AI Video for Social Media Marketing
Build one hero idea, then remix it into new hooks, endings, and shot structures instead of regenerating every ad cut from zero.
AI Video for E-Commerce & Product Marketing
Start from product stills, steer the landing frame, and patch only the weak moments when a pack shot, pour shot, or beauty reveal needs another pass.
AI Video for Advertising & Brand Content
Use reference-led generation and prompt-based edits to push one campaign world across multiple scenes while keeping the cast, styling, and camera language recognizable.
AI Video for Film & Animation Pre-Visualization
Test how a scene starts, where it lands, and which motion beat matters before a team commits to a heavier storyboard or editorial workflow.
Your content, your rights
Rights language matters when teams want to move from concept clips into paid publishing, client review, or final production.
Wan 2.7 exports on Alici are intended to be watermark-free and available for commercial use, including client projects, subject to current Alici terms, model policy, and project-specific rights review before publication.
Wan 2.7 vs other AI video models
| Feature | Wan 2.7 | Veo 3.1 | Kling 3.0 | Runway Gen-4.5 |
|---|---|---|---|---|
| Multi-modal inputs | T2V, I2V, R2V, and prompt-based video editing in one Wan 2.7 workflow | Text or image prompts plus up to 3 reference images | Text, image, video, and audio inputs in one workflow | Text-to-video or image-to-video with a strong single-image anchor |
| Audio workflow | Wan family supports custom audio sync, auto dubbing, and audio-on or silent output by mode | Native audio is generated in the same clip | Native dialogue, ambience, and lip-sync stay in-model | Picture-first short takes are usually finished with sound afterward |
| Reference control | Mix image and video references, or steer shots with first-frame and first-last-frame guidance | Reference images plus first- or last-frame constraints | Image, video, and voice references with storyboard guidance | Single-image anchoring holds identity, object, or product direction |
| Consistency target | Identity, motion feel, and narrative pacing across generation, guidance, and edits | Tight shot-level control across short 4 to 8 second scenes | Character, wardrobe, and voice continuity across multi-cut sequences | Polished subject and object continuity in short commercial-style takes |
| Editing workflow | Global redraws, mask-tracked local edits, replacement, and extension make revision first-class | Extend prior generations and steer openings or endings | Extend clips, use motion brush, and guide camera paths | Iterate short shots quickly, then finish in the broader edit stack |
Wan 2.7 differentiates less by one headline spec and more by how wide the workflow has become. If you care about references, revisions, guided endpoints, and targeted edits inside one model family, it has a clearer identity than earlier Wan releases.
FAQ
Everything you need to know about Wan 2.7
What is Wan 2.7?
Wan 2.7 is Alibaba's latest Wan video release, positioned around four lanes: text-to-video, image-to-video, reference-to-video, and video editing. The real upgrade is breadth: it is not only about generating a clip, but also about steering and revising it inside the same model family.
What inputs does Wan 2.7 support on Alici?
Think in four lanes: text prompts, image-led starts, reference-led generation, and prompt-based edits to an existing clip. That is a wider set of inputs than a basic prompt-plus-image video model offers.
What is actually new about Wan 2.7?
The standout change is the wider workflow surface. Alibaba's launch note emphasizes richer emotional arcs, stronger action, reference-based replacement, and both localized and global edits, so 2.7 reads like a workflow expansion, not only a quality bump.
Can Wan 2.7 edit an existing video?
Yes. Video editing is one of the strongest parts of the current Wan 2.7 story. The broader Wan editing workflow already covers whole-video redraws, masked local edits, subject or background replacement, and extension from an existing clip.
Can Wan 2.7 use images and videos together as references?
Yes. The current Wan reference workflow supports mixed reference files, so you can feed both images and video clips together instead of forcing the whole job into one reference type.
How many references can it actually take?
In the current Wan reference-to-video docs, the limit is up to 5 reference files total, with up to 3 videos and the rest images. That lets you split identity, motion, and style across different references.
Can Wan 2.7 preserve a character's voice or speaking style?
The current Wan reference-to-video docs say reference files can be used to extract appearance and voice when applicable. That makes the system more interesting for performance-led scenes, not only for costume or face continuity.
Can it animate between a first frame and a last frame?
Yes. Wan's image-to-video flow already supports first-and-last-frame generation. It is useful when the shot must arrive on one exact pose, product angle, or composition instead of drifting to a generic finish.
Does Wan 2.7 support audio-driven scenes?
The Wan family matters here. Public Wan image-to-video and reference-to-video workflows already support custom audio sync, auto dubbing, and audio-on or silent output depending on mode, so Wan 2.7 sits inside a clearly audio-capable family.
Is Wan 2.7 better for action scenes or dialogue scenes?
Alibaba is trying to claim both. The launch note highlights more organic emotional depth for narrative scenes and more visceral impact for action, so the model is being positioned as a broader storytelling upgrade rather than only a product-shot tool.
What video quality and duration should I expect?
The public Wan docs still center around short-form output. Across the current workflows, 720P and 1080P are the important quality tiers, image-to-video runs up to 15 seconds, reference-to-video runs up to 10 seconds, and first-and-last-frame mode is fixed at 5 seconds.
Is Wan 2.7 good for ads, social, and product videos?
Yes, especially when one concept needs multiple lives. Wan 2.7 is a better fit when a team wants to generate the first cut, swap one weak element, steer a cleaner ending, or rebuild the same performer into a new scene without starting over.
When should I use Wan 2.7 instead of a heavier storyboard workflow?
Use Wan 2.7 when the problem is still shot-first rather than sequence-first. If you need to prove a hook, a reveal, a reference-led character scene, or one precise revision pass, it is a better fit than jumping straight into a larger editorial pipeline.
How is Wan 2.7 different from Veo 3.1, Kling 3.0, or Runway Gen-4.5?
Wan 2.7 sits in a middle lane. Veo leans into high-control short scenes, Kling leans into broader multimodal production systems, and Runway stays strong for commercial image-led motion. Wan 2.7 becomes more compelling when you care about references, revisions, and directed shot changes in one workflow.
Is Wan 2.7 available on Alici now?
Yes. The Wan 2.7 landing route is live on Alici, and the main reason to click it is not only raw generation. It is the broader promise that one Wan setup can now cover generate, reference, edit, and refine workflows in a way earlier Wan pages did not.
Start creating
Generate with Wan 2.7 on Alici
Open the live generator to test Wan 2.7 as a generation and revision workflow, then decide whether the shot needs a fresh prompt, stronger references, or one more surgical edit pass.
Trusted by creators
The value shows up in review rounds. Teams can hold onto the scene that works and spend their time changing the part that does not.
Audio-led demos felt less like silent placeholders
I care about whether motion and sound feel connected from the first pass. Wan-style audio workflows already made that useful, and Wan 2.7 made the surrounding generation and revision flow feel stronger, so demos needed less cleanup before I could use them.
We stopped rebuilding the same performer from scratch
The issue for me was never getting one good frame. It was getting the same character to survive into the next variation. Wan 2.7 felt better because I could treat references as part of the system, not as a lucky hint, and that made remixing ideas much faster.
We fixed the weak section instead of rerendering everything
A paid social cut usually fails in one small place, not across the whole spot. The tracked edit workflow mattered because we could replace the weak prop or subject area and keep the rest of the motion that was already working.
Reference-led scenes looked more directed on client review
The difference showed up when we stopped pitching isolated pretty shots and started pitching controlled variations of the same world. Wan 2.7 gave us stronger continuity between references, edits, and fresh generations, so the work felt more directed in review.
The model was useful before we ever opened the timeline
I do not want pre-vis to be a final-grade tool. I want it to answer whether the shot should begin here, cut there, and hold this character logic. Wan 2.7 was more useful once it felt like I could generate, guide, and revise inside one family.
Stills, clips, and fixes all lived in one workflow
Our launches usually start with stills, then a few rough clips, then last-minute revision requests. Wan 2.7 matched that reality better because it could start from a still, pull from references, and then patch the exact area that needed to change.