Emotions with AI 🥲 Today I wanted to put the best AI video generators to the test to see whether they can really convey different emotions 👀 I used the same image and the same prompt to generate every clip, and even so each tool gives me a different result… I'm sharing the tests I ran so you can judge for yourselves which generator does it best 😋 And by the way, tomorrow Kling launches its new version, Kling 3.0. You'll soon have new videos putting it to the test. As always, if you comment "ARIA", I'll send you all the prompts for the images and the emotions I used 💌
How soy_aria_cruz Made This Laughter Emotion Test AI Video and How to Recreate It
A laughter benchmark reveals a different weakness than fear or sadness
This clip is part of an AI emotion test, but unlike the terror examples, this one focuses on laughter. That changes what the viewer is judging. Instead of tears, panic, or fear tension, the benchmark becomes about warmth, joy, teeth realism, cheek lift, eye compression, and whether the laughter escalates naturally. The woman stays in a tight selfie-style close-up with glasses, hoop earrings, and a white mesh top while the on-screen label identifies the model as Kling 2.6 and the emotion as "RISAS" (Spanish for "laughter").
For creators, this matters because positive emotion is harder than it looks. A static smile is easy, but believable laughter often falls apart in the mouth shape, teeth, or facial symmetry. That is why this kind of test is useful. It asks whether the model can move from a soft smile into a full laugh without turning the face uncanny.
What You're Seeing
The setup is intentionally minimal
The whole scene is built to remove excuses. There is no dramatic environment, no costume change, and no storytelling distraction. It is just one face, close to camera, in simple daylight against a neutral backdrop. That means every tiny problem becomes visible: how the glasses deform, whether the eyes stay aligned, and how the open mouth changes under motion.
The emotion arc
The clip starts with a friendly smile, then builds into a stronger grin, then reaches a full open-mouth laugh. The best part of this format is that it tests transition quality, not just the final expression. A model can fake one still frame of happiness, but sustaining the movement from smile to laugh is much harder.
Shot-by-shot breakdown
| Time range | Visual content | Shot language | Lighting & color tone | Viewer intent |
|---|---|---|---|---|
| 0:00-0:02 (estimated) | Soft smile with direct eye contact and visible daylight reflections in the glasses | Locked close selfie portrait | Neutral gray background, strong side daylight | Establish baseline realism |
| 0:02-0:04 (estimated) | Smile grows, teeth show, cheeks lift | Same framing, tiny facial motion only | Daylight stays consistent | Test believable emotional build |
| 0:04-0:06 (estimated) | Open-mouth laugh begins, head tips back slightly | Extreme facial-performance close-up | Reflections intensify in the lenses | Stress mouth geometry and identity stability |
| 0:06-0:10 (estimated) | Full sustained laugh with visible teeth and narrowed eyes | No cut, no zoom, pure expression hold | Stable neutral background and natural skin highlights | Judge whether joy remains human-looking under motion |
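The shot table above can double as a scoring rubric: each time window pairs with the artifact class it stresses. Here is a minimal sketch of that idea in Python; the window boundaries come from the estimated timestamps in the table, while the check labels are my own shorthand, not anything defined by the source clip:

```python
# Evaluation windows derived from the estimated timestamps in the table above.
# Each window names what a reviewer should inspect during that span of the clip.
RUBRIC = [
    ((0, 2),  "baseline realism: glasses reflections, eye alignment"),
    ((2, 4),  "emotional build: cheek lift, teeth reveal"),
    ((4, 6),  "mouth geometry and identity stability during the laugh"),
    ((6, 10), "sustained laugh: does joy stay human-looking?"),
]

def checks_at(t):
    """Return the criteria active at time t (seconds into the clip)."""
    return [label for (start, end), label in RUBRIC if start <= t < end]

print(checks_at(5))
# the 4-6s window: mouth geometry and identity stability
```

Pausing a clip at a given second and reading off `checks_at(t)` keeps reviews consistent when you compare the same moment across several generators.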
How to Recreate It
How to run a clean laughter benchmark
- Start with a neutral close-up portrait that already has stable glasses, eyes, and mouth shape.
- Keep the background plain so the viewer only evaluates the face.
- Use a single lighting setup and do not let the environment shift during the clip.
- Prompt a real laughter arc, not just “smile happily.”
- Make sure the clip lasts long enough to show the transition from small smile to full laugh.
- Watch the teeth, jaw, and lens reflections carefully when evaluating results.
- Label the model in-frame so viewers can compare multiple clips later.
- Use the same source face across different tools if you want a fair benchmark series.
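The checklist above amounts to a controlled-variables experiment: one source image, one prompt, one duration, and only the model swapped between runs. A minimal sketch of that run manifest, assuming hypothetical file names and labels (this is a planning structure, not a real Kling API call):

```python
from dataclasses import dataclass

# Hypothetical shared inputs: every run uses the same face and the same
# laughter-arc prompt so that only the model under test varies.
SOURCE_IMAGE = "selfie_glasses.png"  # placeholder path, not from the source video
PROMPT = (
    "Close-up, neutral gray background, steady daylight. "
    "She starts with a soft smile, builds to a wide grin, "
    "then breaks into a full open-mouth laugh."
)

@dataclass
class Run:
    model: str       # in-frame label so viewers can compare clips later
    image: str
    prompt: str
    duration_s: int

def build_runs(models, duration_s=10):
    """Create one run per model with every other variable held fixed."""
    return [Run(m, SOURCE_IMAGE, PROMPT, duration_s) for m in models]

runs = build_runs(["Kling 2.6", "Kling 3.0"])
for r in runs:
    print(r.model, r.duration_s)
```

Holding the image, prompt, and duration constant in one place makes it obvious when a difference between clips comes from the model rather than from the setup.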
What usually breaks first
In laughter clips, the mouth often breaks before the rest of the face. Teeth can become too uniform, the jaw can stretch unnaturally, and eye symmetry can slip once the smile reaches maximum intensity. Those are exactly the failure modes this benchmark is built to surface.
Growth Playbook
3 opening hook lines
- If an AI can fake a real laugh, it is getting closer to usable human performance.
- This is a better test than a pretty portrait, because laughter exposes the mouth fast.
- Same face, same prompt, one question: does Kling 2.6 actually feel human here?
4 caption templates
1. Hook: I tested whether AI video tools can actually transmit laughter, not just smiling. Value: this one is Kling 2.6. Question: Does it look real to you? CTA: Comment ARIA if you want the prompt pack.
2. Hook: Joy is harder than it looks for AI faces. Value: once the mouth opens, weak models fall apart fast. Question: Should I compare this against Kling 3.0 next? CTA: Tell me below.
3. Hook: I wanted to test emotions with the exact same image and prompt. Value: laughter is one of the clearest ways to spot facial issues. Question: Which part breaks first for you, eyes or mouth? CTA: Drop your take.
4. Hook: Fear benchmarks get attention, but laughter benchmarks are even more revealing. Value: they test warmth and realism instead of only dramatic tension. Question: Want more emotion tests? CTA: Say yes in comments.
Hashtag strategy
Broad: #AIVideo, #AIEmotion, #VideoGeneration.
Mid-tier: #LaughterTest, #FacialPerformance, #ModelComparison, #CreatorBenchmark.
Niche long-tail: #Kling26, #AIEmotionTest, #LaughterBenchmark, #SmileToLaugh.
FAQ
Why is laughter a useful AI video benchmark?
Because it stresses teeth, jaw shape, eye compression, and emotional progression all at once.
What makes this Kling 2.6 clip challenging?
The glasses reflections and open-mouth laugh make consistency errors easier to spot.
Should I benchmark only peak emotion frames?
No, the transition into the emotion is often more revealing than the peak expression itself.
What usually fails first in AI laughter videos?
Mouth geometry and dental realism often break before the lighting or pose does.
Can positive-emotion tests perform well on social too?
Yes, they are easier to watch and easier to share while still being useful creator benchmarks.