
Emotions with AI 🥲 Today I wanted to put the best AI video generators to the test to see if they can really convey different emotions 👀 I used the same image and the same prompt to generate them, and even so each one gives me a different result… I'm leaving you the tests I ran so you can judge for yourselves which generator does it best 😋 And, by the way, tomorrow Kling launches its new version: Kling 3.0. You'll soon have new videos putting it to the test. And as always, if you comment "ARIA", I'll send you all the prompts for the images and the emotions I used 💌

Case Snapshot

This 8.2-second vertical clip is a micro-expression benchmark labeled "RISA MINIMA" (Spanish for "minimal laughter") and "SEEDANCE 1.5 PRO." The setup is a deliberately simple soft bedroom portrait: a brunette woman under white bedding, glasses sitting cleanly on her face, cheek resting on her hand, and warm daylight washing the frame. The entire point is to watch a tiny smile emerge and settle naturally. That narrow target is what makes the clip useful: it is not trying to prove broad motion capability, only that subtle facial emotion can survive generation.

What You're Seeing

The duvet acts like a built-in portrait frame

The white bedding isolates the face and helps the viewer focus on mouth, cheeks, and eyes.

The glasses are an important stress test

If the face geometry shifts too much, the glasses will give the problem away immediately. Here they largely stay stable.

The hand-on-cheek pose reduces noise

Because the body remains almost fixed, the clip can be judged primarily on subtle facial change.

The smile arc is clearer than neutral but still restrained

The expression grows a bit warmer than the Kling sample, but it still stays within believable portrait territory.

The on-screen label frames the clip as a benchmark

Without the "RISA MINIMA / SEEDANCE 1.5 PRO" tag, this might read like a cozy lifestyle portrait. The label tells viewers to judge model quality.

Shot-by-shot Breakdown

Time range | Visual content | Emotion change | Technical read | Viewer effect
00:00-00:02.00 | Neutral-soft portrait under the duvet with steady eye contact. | Calm baseline. | High stability in glasses and facial structure. | Immediately intimate and believable.
00:02.00-00:04.50 | Lips rise gently while the pose remains locked. | Minimal smile begins. | Good cheek and eye coordination with low drift. | Feels warm rather than posed.
00:04.50-00:06.50 | The smile becomes more open and visible. | Soft happiness. | Still stable, with controlled mouth geometry. | Most human-feeling phase of the clip.
00:06.50-00:08.22 | Final settled smile with bright eyes and cozy bedding frame. | Gentle joy. | Strong ending because structure stays intact. | Leaves a positive benchmark impression.

Why It Works

It asks the model to do one difficult thing well

Subtle facial emotion is hard. This clip succeeds because it narrows the challenge instead of overloading it.

The scene design removes distractions

No complex background, no heavy body motion, no aggressive speech. Just a face, a pose, and a smile progression.

The output feels more useful than a louder emotional test

Anyone trying to build realistic portrait or influencer content benefits more from controlled smile behavior than from extreme theatrical emotion.

Model Comparison Angle

This is why benchmark labels matter

When the same source image and same emotion are run across multiple generators, the label helps viewers compare without confusion.

Seedance 1.5 Pro reads slightly warmer here

In this sample the smile evolves toward a cleaner, more naturally open ending than a flatter, more static generation would produce.

Micro-expression tests are better than big-motion tests for face models

They reveal whether a generator understands emotional nuance rather than only coarse movement.

Prompt Breakdown

The bedding is not decoration

It simplifies the silhouette and creates a clean portrait envelope around the face.

The glasses increase benchmark value

Stable lens shape and alignment help prove whether the face geometry remains coherent as the smile changes.

The emotional ramp must stay small

If the smile grows too fast or too far, the clip stops being a micro-emotion test and becomes a different category of expression benchmark.

How to Recreate It

Step 1: Use a face with strong eye detail and clean glasses

That makes stability easier to judge frame by frame.

Step 2: Lock the pose almost completely

Hand on cheek and body supported by bedding remove unnecessary motion noise.

Step 3: Ask for one specific emotional level

"Minimal smile" is a stronger benchmark than a vague word like "happy."

Step 4: Keep the lighting soft and natural

Daylight portraits make face quality easier to evaluate than dramatic stylized lighting.

Step 5: Label the model on screen

If you are comparing outputs, the label turns a pretty clip into a useful benchmark asset.
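The five steps above can be condensed into a rough configuration sketch. Everything here is illustrative: the field names, the prompt wording, and the validation helper are assumptions for organizing your own tests, not Seedance's actual API or parameters.

```python
# Hypothetical prompt/config sketch for a minimal-smile benchmark clip.
# Field names and structure are illustrative; adapt them to whatever
# generator UI or API you are actually using.

benchmark_request = {
    "model_label": "SEEDANCE 1.5 PRO",       # shown on screen for comparison value
    "source_image": "bedroom_portrait.png",  # face with clean glasses, strong eye detail
    "duration_seconds": 8.2,
    "prompt": (
        "A woman rests her cheek on her hand under white bedding, "
        "soft natural daylight. Her expression shifts from calm neutral "
        "to a minimal, restrained smile. Pose and camera stay fixed."
    ),
    "negative_prompt": "laughter, speech, head turns, camera movement",
    "emotion_target": "minimal smile",       # one specific level, not a vague 'happy'
}

def validate_benchmark(request: dict) -> list[str]:
    """Flag settings that would weaken a micro-expression benchmark."""
    issues = []
    # Vague emotion words make results hard to compare across models.
    if request["emotion_target"] in {"happy", "joyful", "emotional"}:
        issues.append("Emotion target too vague for a benchmark.")
    # Speech, laughter, and head turns all add motion noise to the test.
    for word in ("laughter", "speech", "head turns"):
        if word not in request["negative_prompt"]:
            issues.append(f"Consider excluding '{word}' to keep the test isolated.")
    # An unlabeled clip reads as a lifestyle portrait, not a benchmark asset.
    if not request.get("model_label"):
        issues.append("Add an on-screen model label for comparison value.")
    return issues

print(validate_benchmark(benchmark_request))  # [] -> clean benchmark setup
```

Running the same checked request through each generator you are comparing keeps the test controlled, so differences in the output come from the model rather than the setup.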

Growth Playbook

3 opening hook lines

  • This is a better AI benchmark than a flashy motion test because the face has nowhere to hide.
  • I used the same image and same emotion across different generators, and this is one of the cleanest results.
  • Minimal smile is still one of the hardest and most useful AI portrait tests.

4 caption templates

  1. Hook: "Same image, same prompt, different AI video model." Value: "This Seedance 1.5 Pro sample handles a minimal smile with surprisingly stable face geometry." Question: "Which model do you think is winning this benchmark?" CTA: "Comment ARIA and I'll send the prompts."
  2. Hook: "Not every AI face test needs drama." Value: "Subtle smiles tell you more about model quality than loud motion effects." Question: "Would you use this for portrait content?" CTA: "Write ARIA below."
  3. Hook: "This is the kind of benchmark that actually matters for creators." Value: "If a model can do soft emotion well, it becomes much more useful for realistic videos." Question: "What emotion should I test next?" CTA: "Type ARIA."
  4. Hook: "The face looks simple, but this is a difficult test." Value: "Glasses, skin, eye warmth, and mouth shape all have to remain coherent while the smile grows." Question: "Do you want more model comparisons like this?" CTA: "Comment ARIA."

Hashtag strategy

Broad: #AIVideo #AIEmotions #PortraitAI #ModelComparison. These support general discovery.

Mid-tier: #Seedance15Pro #MinimalSmile #AIFaceTest #EmotionBenchmark. These fit the actual use case more closely.

Niche long-tail: #SeedanceEmotionTest #RisaMinima #AIMicroExpression #PortraitBenchmarkAI. These target viewers specifically researching facial emotion generation.

FAQ

Why is a minimal smile such a useful benchmark?

Because it requires small but coordinated changes across the mouth, cheeks, and eyes without any obvious room for overacting.

Why keep the pose almost completely static?

So the viewer can evaluate emotional rendering instead of getting distracted by body motion or camera tricks.

Why are glasses valuable in this test?

They make any face-geometry instability immediately visible.

What would weaken this benchmark?

Adding speech, dramatic laughter, or strong head turns would make it harder to isolate the model's emotional control.

Why label the model directly on the video?

It turns the clip into a clean comparison asset when placed alongside outputs from other generators.