
Kling 2.6 Motion Control Tests 🎬 Here are some tests I ran with Kling... Honestly, I still think it has a long way to go before reaching a decent result 👀 There is no consistency at any point, the face deforms with every passing second, not to mention that out of every 10 videos I try to generate, 8 of them throw an error 🥲 For now it works for making funny videos for the internet or social media, but in no case for a professional project 😅 Still, if you want me to send you the reference videos I used to make these clips, comment "ARIA" and I'll send them to you by DM 💌

Case Snapshot

This 29-second vertical clip is a far clearer motion-control test than a standalone generated result because it places the copied performance directly above the original. The bottom half shows the source actress in a green asymmetrical dress moving through a warm, palace-like corridor. The top half shows Kling 2.6 attempting to transfer that motion onto a different woman in a black dress with dark hair. By keeping both versions visible at the same time, the reel turns a subjective "looks okay" judgment into a rigorous side-by-side evaluation.

What You're Seeing

The split-screen does all the educational work

Without the original underneath, many viewers would overestimate the copied result. The comparison reveals exactly where the model succeeds and fails.

The corridor background is intentionally constant

Matching chandelier-lit architecture helps isolate the test to identity transfer, expression timing, and body motion.

The wardrobe swap is part of the challenge

The model is not only changing the face. It is also translating movement into a different black dress silhouette, which adds difficulty during larger body motion.

Facial emotion begins stable, then gets shakier

In quieter close-up moments the copy can appear convincing. As intensity rises, expression fidelity becomes less reliable.

The full-body spin is the stress test

Dress movement, leg continuity, and timing coherence are much harder than chest-up facial mimicry. That is where the real evaluation happens.

Shot-by-shot Breakdown

| Time range | Visual content | Comparison focus | Technical risk | Viewer takeaway |
|---|---|---|---|---|
| 00:00-00:07.00 | Chest-up split-screen with calm but serious facial performance. | Identity remap and baseline expression sync. | Low to moderate. | The copy looks plausible at first glance. |
| 00:07.00-00:18.00 | More aggressive expression and arm-led motion. | Emotion timing and facial deformation resistance. | Moderate to high. | Differences between original and copy become easier to spot. |
| 00:18.00-00:29.23 | Full-body dress movement and spinning corridor choreography. | Skirt physics, limb continuity, and whole-body transfer. | Highest. | The copy is useful as a demo, not yet a production-grade replacement. |

Why This Comparison Works

It removes hype ambiguity

Many AI demos look strong until the original reference is shown. This format forces honesty.

The labels make the experiment immediately readable

"COPIA" (Spanish for "copy") and "ORIGINAL" remove confusion and let the viewer analyze instead of guessing.

The scene is difficult enough to be meaningful

This is not just a head nod or gentle hand wave. The performance escalates into a dress-heavy full-body test that genuinely challenges the model.

The environment stays elegant and simple

By avoiding visual clutter, the clip keeps attention on the motion-control question itself.

Failure Points

Facial intensity drifts first

As the original performance becomes more emotional, the copied version can overcook or flatten facial shapes.

Dress dynamics are expensive for the model

Asymmetrical draped garments create harder motion geometry than tight, static clothing.

Full-body synchronization exposes timing gaps

Even if the copied motion is "close," split-screen makes small offsets feel more obvious.

This is why the creator still calls it a social-content tool

The output is interesting and useful for demos, but not dependable enough for serious performance-critical production.

Prompt Breakdown

The scene choice is smart

An elegant corridor with one performer keeps the visual language rich while avoiding the chaos of crowds or complex action blocking.

The wardrobe swap must stay readable

Green original and black copy help viewers track the comparison faster than subtle costume differences would.

The split-screen framing is essential

This is not just one stylistic option. It is the mechanism that makes the reel educational.

How to Recreate It

Step 1: Choose one strong original motion clip

It should contain both calm and difficult sections so the model can be tested across different intensity levels.

Step 2: Keep the environment comparable

Use a copied scene that preserves the same background logic so viewers focus on motion transfer rather than scene variation.

Step 3: Make the copied identity visually distinct

A wardrobe or hair-color change helps viewers tell top from bottom instantly.

Step 4: Present both versions together

Do not rely on memory. Split-screen is much more rigorous than showing them sequentially.
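If you are assembling the comparison yourself, a vertical stack can be built with ffmpeg's `vstack` filter. This is a minimal sketch, not the creator's actual pipeline: the filenames are hypothetical, and the first two commands synthesize short test inputs with `testsrc` purely so the example runs end-to-end (swap in your real copy and original clips, which should share the same width).

```shell
# Hypothetical inputs: generate two 2-second test clips so the example is
# self-contained (replace these with your real AI copy and original clips).
ffmpeg -y -f lavfi -i "testsrc=duration=2:size=540x480:rate=24" copy.mp4
ffmpeg -y -f lavfi -i "testsrc=duration=2:size=540x480:rate=24" original.mp4

# Stack the AI copy (first input, top) over the original (second input,
# bottom) into one vertical comparison clip; both inputs must share a width.
ffmpeg -y -i copy.mp4 -i original.mp4 \
  -filter_complex "[0:v][1:v]vstack=inputs=2[v]" \
  -map "[v]" -c:v libx264 -crf 18 comparison.mp4
```

From here, the "COPIA"/"ORIGINAL" labels can be burned in with ffmpeg's `drawtext` filter or added in any editor before export.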

Step 5: Include one hard full-body moment

If the reel never leaves chest-up expression, you are not really testing motion control deeply enough.

Growth Playbook

3 opening hook lines

  • This is a much better AI motion-control test because you can see the copy and the original at the same time.
  • Kling 2.6 looks decent in still moments and much weaker once the dress starts moving.
  • Side-by-side comparisons are where motion-control hype starts breaking down.

4 caption templates

  1. Hook: "Copy vs original." Value: "This split-screen makes it obvious what Kling 2.6 can imitate and where it still breaks." Question: "Would you trust this for a real project?" CTA: "Comment ARIA if you want the prompts."
  2. Hook: "Motion control only feels real when you show the source clip too." Value: "The face holds up better than the dress dynamics and body continuity." Question: "What do you think fails first?" CTA: "Write ARIA below."
  3. Hook: "The top version is the AI copy, the bottom is the real performance." Value: "This is the kind of comparison creators need before calling a model production-ready." Question: "Do you want more side-by-side tests?" CTA: "Type ARIA."
  4. Hook: "The close-ups almost pass, the full-body part is where the model gets exposed." Value: "That is why split-screen motion tests are much more useful than best-frame showcases." Question: "Should I compare Kling 3.0 next?" CTA: "Comment ARIA."

Hashtag strategy

Broad: #AIVideo #KlingAI #MotionControl #AIComparison. These support general discovery.

Mid-tier: #Kling26 #MotionControlTest #CopyVsOriginal #AIDemo. These fit the actual format more closely.

Niche long-tail: #KlingMotionCopy #OriginalVsCopyAI #DressMotionTest #AIMotionControlComparison. These target viewers researching this exact workflow.

FAQ

Why is split-screen better than showing one generated result?

Because it removes memory bias and lets viewers judge the transfer quality directly in real time.

What part of the copied motion fails most visibly?

Large expression bursts and full-body dress movement are the highest-risk areas.

Why keep the same corridor background in both versions?

It isolates the test to motion and identity transfer instead of adding scene differences.

Is this format useful even if the result is imperfect?

Yes, because honest side-by-side comparisons are more valuable to creators than polished hype clips.

What would make this test even stronger?

Including several difficulty levels, from facial expression to full-body motion, which this clip already starts to do.