Kling 2.6 Motion Control Tests 🎬 Here are some tests I did with Kling... Honestly, I still think it has a long way to go before producing a decent result 👀 There is no consistency at any point, the face deforms with every passing second, and on top of that, out of every 10 videos I try to generate, 8 of them give me an error 🥲 For now it works for making funny videos for the internet or social media, but in no way for a professional project 😅 Still, if you want me to send you the reference videos I used to make these clips, comment "ARIA" and I'll send them to you by DM 💌
Case Snapshot
This reel is a direct motion-control benchmark. In the bottom half, an older man in a suit performs broad body movements in a warm hotel-lobby-like space. In the top half, a young woman in a black wrap top and white skirt attempts to copy those exact motions under a “Kling 2.6 Motion Control” label, making the video a clear visual test of how well the system transfers movement from one body to another.
What You're Seeing
The layout removes ambiguity
By showing original and copied motion at the same time, the reel makes it easy to evaluate the tool instead of relying on captions alone.
The original performer is intentionally mismatched
Using an older man in a suit as the reference and a young woman as the copied target is a stress test. It reveals how the model handles movement style transfer across very different bodies and wardrobes.
The warm luxury lobby is useful as a control environment
Because both panels share the same kind of interior, viewers can focus on motion quality instead of environmental differences.
The copied motion is readable even when it is imperfect
That is what makes the test interesting. The woman is clearly following the same gesture logic, but the body language may still feel uncanny or unstable.
The reel works because it is honest
The caption openly says the results are not professional quality yet. The comparison layout visually supports that critique instead of overselling the tool.
This is high-value creator content
People interested in motion control want to see failures as much as successes. Honest test reels like this are more useful than cherry-picked perfect clips.
Section-by-section breakdown
| Time range | Visual content | Test role | What it demonstrates | Viewer takeaway |
|---|---|---|---|---|
| 00:00-00:04 (estimated) | Split view establishes original and copied performers | Benchmark setup | Shows the exact comparison framework | The viewer knows how to judge the clip immediately |
| 00:04-00:08 (estimated) | Arm and torso positions begin to mirror across panels | Transfer readability test | Proves the system follows broad pose direction | Motion control is working at a basic level |
| 00:08-00:12 (estimated) | Wider gestures and steps reveal awkwardness | Failure-exposure phase | Shows where body consistency starts degrading | The model is not stable enough for polished work |
| 00:12-00:19 (estimated) | Continued mimicry with visible uncanny drift | Final verdict segment | Highlights the gap between fun experiment and professional result | Useful for social content, not yet reliable for serious production |
Prompt Breakdown
The comparison layout itself is part of the content
This is not just one video output. The split labeling and motion-control badge are essential to how the audience interprets the test.
Motion transfer needs a simple room and stable camera
A warm lobby with fixed framing allows the body-language differences to stand out without distraction.
Broad gestures expose the model faster
Arm sweeps, hip shifts, and stance changes are useful because they show whether the system actually understands motion or just approximates silhouettes.
Body mismatch is a deliberate stress test
Transferring a middle-aged suited man’s movement onto a young woman is much harder than using a similar reference. That is why the test is useful.
The goal is evaluation, not illusion
The copied panel does not have to look perfect. It only has to reveal how far the model currently gets and where it breaks.
How to Recreate It
Step 1: Pick a difficult reference-target pair
Choose subjects with different age, clothing, and body language so the transfer test becomes meaningful.
Step 2: Use one shared environment
Keeping both panels in the same or similar location removes environmental noise from the comparison.
Step 3: Label the top and bottom clearly
Viewers should instantly know which panel is the original and which is the copied output.
Step 4: Use gestures with readable body mechanics
Shoulder shifts, arm sweeps, stance changes, and steps reveal transfer quality better than tiny hand motions.
Step 5: Keep the camera stable
Fixed framing makes it easier to compare posture, rhythm, and deformation over time.
Step 6: Let the imperfections show
Do not hide every failure. Honest benchmark content is more useful and more believable.
Step 7: Frame the conclusion honestly
Tell viewers whether the result is only good for funny social content or actually ready for production.
Step 8: Offer the reference clips as value
If the audience wants to test the same workflow, sharing the reference motions is a natural CTA.
Growth Playbook
Three opening hook lines
- The best motion-control demos are the ones that are honest enough to show where the model fails.
- If you want to stress-test AI motion transfer, do not use a matching body. Use a completely different performer.
- This kind of split comparison is much more useful than a flashy standalone clip because it gives viewers a real benchmark.
Four caption templates
- Hook: “Here are some Kling 2.6 Motion Control tests I did.” Value: “It copies the movement, but the consistency still breaks fast.” Question: “Do you think this is usable yet?” CTA: “Comment ARIA and I’ll send the reference videos.”
- Hook: “Motion control looks promising until you compare original and copy side by side.” Value: “That is where the deformations become obvious.” Question: “Should I test another generator?” CTA: “Save this benchmark.”
- Hook: “This is fun for internet content, but not for serious production yet.” Value: “The split-screen test makes that pretty clear.” Question: “Do you want more honest AI tool reviews?” CTA: “Drop the keyword below.”
- Hook: “The fastest way to evaluate motion transfer is a mismatched body test like this.” Value: “If it survives here, it is actually strong.” Question: “Want the exact setup?” CTA: “Comment and I’ll share more.”
Hashtag strategy
Broad: #AIVideo, #MotionControl, #AIComparison. These reach the broadest relevant audiences interested in AI video performance.
Mid-tier: #Kling26, #BenchmarkTest, #AICreator, #BodySwapAI. These fit creator-led model evaluation content.
Niche long-tail: #KlingMotionControlTest, #CopyVsOriginalAI, #MotionTransferBenchmark, #ReferenceVideoTest. These map directly to the exact clip format.
FAQ
Why use such different performers in the test?
Because it exposes the limits of motion transfer much faster than using two similar bodies.
What is the most useful part of the layout?
The simultaneous original-versus-copy view, because it makes drift and deformation impossible to hide.
Why keep the environment the same in both halves?
It isolates the motion problem so viewers can judge the body transfer itself.
What usually breaks first in motion-control tests like this?
Facial consistency, body proportions during larger gestures, and the timing of limb placement often degrade first.
Why is this good social content even if it fails?
Because people find uncanny or imperfect AI outputs interesting, funny, and discussion-worthy.
Is this useful for professional projects yet?
Based on this type of result, it looks more suitable for experiments and social posts than for reliable production work.